Oct 31 00:44:49.705408 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 31 00:44:49.705443 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu Oct 30 23:38:01 -00 2025
Oct 31 00:44:49.705451 kernel: efi: EFI v2.70 by EDK II
Oct 31 00:44:49.705459 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Oct 31 00:44:49.705465 kernel: random: crng init done
Oct 31 00:44:49.705523 kernel: ACPI: Early table checksum verification disabled
Oct 31 00:44:49.705530 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Oct 31 00:44:49.705538 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 31 00:44:49.705544 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705549 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705589 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705595 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705600 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705606 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705618 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705625 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705632 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:44:49.705639 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 31 00:44:49.705646 kernel: NUMA: Failed to initialise from firmware
Oct 31 00:44:49.705653 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 00:44:49.705659 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Oct 31 00:44:49.705664 kernel: Zone ranges:
Oct 31 00:44:49.705670 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 00:44:49.705677 kernel: DMA32 empty
Oct 31 00:44:49.705683 kernel: Normal empty
Oct 31 00:44:49.705688 kernel: Movable zone start for each node
Oct 31 00:44:49.705694 kernel: Early memory node ranges
Oct 31 00:44:49.705700 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Oct 31 00:44:49.705705 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Oct 31 00:44:49.705711 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Oct 31 00:44:49.705717 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Oct 31 00:44:49.705722 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Oct 31 00:44:49.705728 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Oct 31 00:44:49.705733 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Oct 31 00:44:49.705739 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 00:44:49.705746 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 31 00:44:49.705751 kernel: psci: probing for conduit method from ACPI.
Oct 31 00:44:49.705757 kernel: psci: PSCIv1.1 detected in firmware.
Oct 31 00:44:49.705762 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 31 00:44:49.705768 kernel: psci: Trusted OS migration not required
Oct 31 00:44:49.705776 kernel: psci: SMC Calling Convention v1.1
Oct 31 00:44:49.705782 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 31 00:44:49.705789 kernel: ACPI: SRAT not present
Oct 31 00:44:49.705796 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Oct 31 00:44:49.705802 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Oct 31 00:44:49.705808 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 31 00:44:49.705814 kernel: Detected PIPT I-cache on CPU0
Oct 31 00:44:49.705820 kernel: CPU features: detected: GIC system register CPU interface
Oct 31 00:44:49.705826 kernel: CPU features: detected: Hardware dirty bit management
Oct 31 00:44:49.705832 kernel: CPU features: detected: Spectre-v4
Oct 31 00:44:49.705839 kernel: CPU features: detected: Spectre-BHB
Oct 31 00:44:49.705847 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 31 00:44:49.705853 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 31 00:44:49.705859 kernel: CPU features: detected: ARM erratum 1418040
Oct 31 00:44:49.705865 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 31 00:44:49.705871 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 31 00:44:49.705877 kernel: Policy zone: DMA
Oct 31 00:44:49.705897 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c54831d8f121b00ec4768e5b1793fd4b2eb83931891a70a1aede21bf2f1a9635
Oct 31 00:44:49.705904 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 31 00:44:49.705910 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 31 00:44:49.705916 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 00:44:49.705923 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 00:44:49.705945 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Oct 31 00:44:49.705951 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 31 00:44:49.705957 kernel: trace event string verifier disabled
Oct 31 00:44:49.705963 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 00:44:49.705970 kernel: rcu: RCU event tracing is enabled.
Oct 31 00:44:49.705976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 31 00:44:49.706020 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 00:44:49.706029 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 00:44:49.706035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 00:44:49.706041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 31 00:44:49.706047 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 31 00:44:49.706056 kernel: GICv3: 256 SPIs implemented
Oct 31 00:44:49.706062 kernel: GICv3: 0 Extended SPIs implemented
Oct 31 00:44:49.706068 kernel: GICv3: Distributor has no Range Selector support
Oct 31 00:44:49.706074 kernel: Root IRQ handler: gic_handle_irq
Oct 31 00:44:49.706080 kernel: GICv3: 16 PPIs implemented
Oct 31 00:44:49.706086 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 31 00:44:49.706092 kernel: ACPI: SRAT not present
Oct 31 00:44:49.706098 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 31 00:44:49.706104 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Oct 31 00:44:49.706175 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Oct 31 00:44:49.706184 kernel: GICv3: using LPI property table @0x00000000400d0000
Oct 31 00:44:49.706190 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Oct 31 00:44:49.706199 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:44:49.706205 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 31 00:44:49.706211 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 31 00:44:49.706218 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 31 00:44:49.706224 kernel: arm-pv: using stolen time PV
Oct 31 00:44:49.706230 kernel: Console: colour dummy device 80x25
Oct 31 00:44:49.706237 kernel: ACPI: Core revision 20210730
Oct 31 00:44:49.706243 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 31 00:44:49.706250 kernel: pid_max: default: 32768 minimum: 301
Oct 31 00:44:49.706256 kernel: LSM: Security Framework initializing
Oct 31 00:44:49.706264 kernel: SELinux: Initializing.
Oct 31 00:44:49.706270 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:44:49.706276 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:44:49.706282 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 00:44:49.706288 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 31 00:44:49.707455 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 31 00:44:49.707498 kernel: Remapping and enabling EFI services.
Oct 31 00:44:49.707506 kernel: smp: Bringing up secondary CPUs ...
Oct 31 00:44:49.707513 kernel: Detected PIPT I-cache on CPU1
Oct 31 00:44:49.707525 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 31 00:44:49.707532 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Oct 31 00:44:49.707539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:44:49.707545 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 31 00:44:49.707551 kernel: Detected PIPT I-cache on CPU2
Oct 31 00:44:49.707558 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 31 00:44:49.707564 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Oct 31 00:44:49.707655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:44:49.707663 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 31 00:44:49.707669 kernel: Detected PIPT I-cache on CPU3
Oct 31 00:44:49.707680 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 31 00:44:49.707687 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Oct 31 00:44:49.707693 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:44:49.707700 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 31 00:44:49.707710 kernel: smp: Brought up 1 node, 4 CPUs
Oct 31 00:44:49.707718 kernel: SMP: Total of 4 processors activated.
Oct 31 00:44:49.707725 kernel: CPU features: detected: 32-bit EL0 Support
Oct 31 00:44:49.707732 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 31 00:44:49.707738 kernel: CPU features: detected: Common not Private translations
Oct 31 00:44:49.707745 kernel: CPU features: detected: CRC32 instructions
Oct 31 00:44:49.707752 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 31 00:44:49.707759 kernel: CPU features: detected: LSE atomic instructions
Oct 31 00:44:49.707767 kernel: CPU features: detected: Privileged Access Never
Oct 31 00:44:49.707774 kernel: CPU features: detected: RAS Extension Support
Oct 31 00:44:49.707781 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 31 00:44:49.707787 kernel: CPU: All CPU(s) started at EL1
Oct 31 00:44:49.707831 kernel: alternatives: patching kernel code
Oct 31 00:44:49.707842 kernel: devtmpfs: initialized
Oct 31 00:44:49.707849 kernel: KASLR enabled
Oct 31 00:44:49.707856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 00:44:49.707863 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 31 00:44:49.707869 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 00:44:49.707876 kernel: SMBIOS 3.0.0 present.
Oct 31 00:44:49.707883 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Oct 31 00:44:49.707890 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 00:44:49.707896 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 31 00:44:49.707905 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 31 00:44:49.707911 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 31 00:44:49.707918 kernel: audit: initializing netlink subsys (disabled)
Oct 31 00:44:49.707925 kernel: audit: type=2000 audit(0.036:1): state=initialized audit_enabled=0 res=1
Oct 31 00:44:49.707932 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 00:44:49.707938 kernel: cpuidle: using governor menu
Oct 31 00:44:49.707945 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 31 00:44:49.707952 kernel: ASID allocator initialised with 32768 entries
Oct 31 00:44:49.707958 kernel: ACPI: bus type PCI registered
Oct 31 00:44:49.707966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 00:44:49.707973 kernel: Serial: AMBA PL011 UART driver
Oct 31 00:44:49.707980 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 00:44:49.707986 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 31 00:44:49.707993 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 00:44:49.708000 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 31 00:44:49.708007 kernel: cryptd: max_cpu_qlen set to 1000
Oct 31 00:44:49.708014 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 31 00:44:49.708021 kernel: ACPI: Added _OSI(Module Device)
Oct 31 00:44:49.708029 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 00:44:49.708036 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 00:44:49.708043 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 31 00:44:49.708049 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 31 00:44:49.708056 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 31 00:44:49.708063 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 00:44:49.708069 kernel: ACPI: Interpreter enabled
Oct 31 00:44:49.708076 kernel: ACPI: Using GIC for interrupt routing
Oct 31 00:44:49.708083 kernel: ACPI: MCFG table detected, 1 entries
Oct 31 00:44:49.708091 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 31 00:44:49.708097 kernel: printk: console [ttyAMA0] enabled
Oct 31 00:44:49.708139 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 00:44:49.708279 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 00:44:49.708398 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 31 00:44:49.708731 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 31 00:44:49.708994 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 31 00:44:49.709071 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 31 00:44:49.709081 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 31 00:44:49.709087 kernel: PCI host bridge to bus 0000:00
Oct 31 00:44:49.709157 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 31 00:44:49.709213 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 31 00:44:49.709266 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 31 00:44:49.709578 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 00:44:49.710211 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 31 00:44:49.710808 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 31 00:44:49.712277 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 31 00:44:49.712381 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 31 00:44:49.712461 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 31 00:44:49.712527 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 31 00:44:49.712591 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 31 00:44:49.712661 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 31 00:44:49.712721 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 31 00:44:49.712776 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 31 00:44:49.712879 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 31 00:44:49.712890 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 31 00:44:49.712898 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 31 00:44:49.712905 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 31 00:44:49.712915 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 31 00:44:49.712922 kernel: iommu: Default domain type: Translated
Oct 31 00:44:49.712929 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 31 00:44:49.712935 kernel: vgaarb: loaded
Oct 31 00:44:49.712942 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 31 00:44:49.712949 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 31 00:44:49.712956 kernel: PTP clock support registered
Oct 31 00:44:49.712963 kernel: Registered efivars operations
Oct 31 00:44:49.712970 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 31 00:44:49.712976 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 00:44:49.712987 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 00:44:49.712994 kernel: pnp: PnP ACPI init
Oct 31 00:44:49.713131 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 31 00:44:49.713145 kernel: pnp: PnP ACPI: found 1 devices
Oct 31 00:44:49.713152 kernel: NET: Registered PF_INET protocol family
Oct 31 00:44:49.713159 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 31 00:44:49.713166 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 31 00:44:49.713172 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 00:44:49.713182 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 00:44:49.713190 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 31 00:44:49.713197 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 31 00:44:49.713203 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:44:49.713210 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:44:49.713217 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 00:44:49.713223 kernel: PCI: CLS 0 bytes, default 64
Oct 31 00:44:49.713230 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 31 00:44:49.713237 kernel: kvm [1]: HYP mode not available
Oct 31 00:44:49.713245 kernel: Initialise system trusted keyrings
Oct 31 00:44:49.713251 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 31 00:44:49.713258 kernel: Key type asymmetric registered
Oct 31 00:44:49.713265 kernel: Asymmetric key parser 'x509' registered
Oct 31 00:44:49.713272 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 31 00:44:49.713278 kernel: io scheduler mq-deadline registered
Oct 31 00:44:49.713285 kernel: io scheduler kyber registered
Oct 31 00:44:49.713292 kernel: io scheduler bfq registered
Oct 31 00:44:49.713302 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 31 00:44:49.713311 kernel: ACPI: button: Power Button [PWRB]
Oct 31 00:44:49.713321 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 31 00:44:49.713400 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 31 00:44:49.713410 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 31 00:44:49.713438 kernel: thunder_xcv, ver 1.0
Oct 31 00:44:49.713445 kernel: thunder_bgx, ver 1.0
Oct 31 00:44:49.713452 kernel: nicpf, ver 1.0
Oct 31 00:44:49.713459 kernel: nicvf, ver 1.0
Oct 31 00:44:49.713533 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 31 00:44:49.713593 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-31T00:44:49 UTC (1761871489)
Oct 31 00:44:49.713602 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 31 00:44:49.713609 kernel: NET: Registered PF_INET6 protocol family
Oct 31 00:44:49.713615 kernel: Segment Routing with IPv6
Oct 31 00:44:49.713622 kernel: In-situ OAM (IOAM) with IPv6
Oct 31 00:44:49.713629 kernel: NET: Registered PF_PACKET protocol family
Oct 31 00:44:49.713635 kernel: Key type dns_resolver registered
Oct 31 00:44:49.713642 kernel: registered taskstats version 1
Oct 31 00:44:49.713650 kernel: Loading compiled-in X.509 certificates
Oct 31 00:44:49.713657 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: e62237f95ba4ddc0e942e4538fe1019cd3c2f62a'
Oct 31 00:44:49.713663 kernel: Key type .fscrypt registered
Oct 31 00:44:49.713669 kernel: Key type fscrypt-provisioning registered
Oct 31 00:44:49.713676 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 31 00:44:49.713683 kernel: ima: Allocated hash algorithm: sha1
Oct 31 00:44:49.713689 kernel: ima: No architecture policies found
Oct 31 00:44:49.713696 kernel: clk: Disabling unused clocks
Oct 31 00:44:49.713702 kernel: Freeing unused kernel memory: 36416K
Oct 31 00:44:49.713710 kernel: Run /init as init process
Oct 31 00:44:49.713717 kernel: with arguments:
Oct 31 00:44:49.713724 kernel: /init
Oct 31 00:44:49.713730 kernel: with environment:
Oct 31 00:44:49.713736 kernel: HOME=/
Oct 31 00:44:49.713743 kernel: TERM=linux
Oct 31 00:44:49.713750 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 31 00:44:49.713759 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 31 00:44:49.713769 systemd[1]: Detected virtualization kvm.
Oct 31 00:44:49.713776 systemd[1]: Detected architecture arm64.
Oct 31 00:44:49.713783 systemd[1]: Running in initrd.
Oct 31 00:44:49.713790 systemd[1]: No hostname configured, using default hostname.
Oct 31 00:44:49.713797 systemd[1]: Hostname set to .
Oct 31 00:44:49.713805 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:44:49.713812 systemd[1]: Queued start job for default target initrd.target.
Oct 31 00:44:49.713819 systemd[1]: Started systemd-ask-password-console.path.
Oct 31 00:44:49.713827 systemd[1]: Reached target cryptsetup.target.
Oct 31 00:44:49.713834 systemd[1]: Reached target paths.target.
Oct 31 00:44:49.713841 systemd[1]: Reached target slices.target.
Oct 31 00:44:49.713849 systemd[1]: Reached target swap.target.
Oct 31 00:44:49.713856 systemd[1]: Reached target timers.target.
Oct 31 00:44:49.713863 systemd[1]: Listening on iscsid.socket.
Oct 31 00:44:49.713870 systemd[1]: Listening on iscsiuio.socket.
Oct 31 00:44:49.713885 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 31 00:44:49.713892 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 31 00:44:49.713899 systemd[1]: Listening on systemd-journald.socket.
Oct 31 00:44:49.713907 systemd[1]: Listening on systemd-networkd.socket.
Oct 31 00:44:49.713914 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 31 00:44:49.713921 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 31 00:44:49.713928 systemd[1]: Reached target sockets.target.
Oct 31 00:44:49.713936 systemd[1]: Starting kmod-static-nodes.service...
Oct 31 00:44:49.713943 systemd[1]: Finished network-cleanup.service.
Oct 31 00:44:49.713952 systemd[1]: Starting systemd-fsck-usr.service...
Oct 31 00:44:49.713959 systemd[1]: Starting systemd-journald.service...
Oct 31 00:44:49.713966 systemd[1]: Starting systemd-modules-load.service...
Oct 31 00:44:49.713973 systemd[1]: Starting systemd-resolved.service...
Oct 31 00:44:49.713980 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 31 00:44:49.713988 systemd[1]: Finished kmod-static-nodes.service.
Oct 31 00:44:49.713995 systemd[1]: Finished systemd-fsck-usr.service.
Oct 31 00:44:49.714002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 31 00:44:49.714009 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 31 00:44:49.714018 kernel: audit: type=1130 audit(1761871489.707:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.714026 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 31 00:44:49.714033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 31 00:44:49.714044 systemd-journald[290]: Journal started
Oct 31 00:44:49.714090 systemd-journald[290]: Runtime Journal (/run/log/journal/a49e375400a340e4b6aa9e6ca4402a74) is 6.0M, max 48.7M, 42.6M free.
Oct 31 00:44:49.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.701125 systemd-modules-load[291]: Inserted module 'overlay'
Oct 31 00:44:49.719889 kernel: audit: type=1130 audit(1761871489.714:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.719911 systemd[1]: Started systemd-journald.service.
Oct 31 00:44:49.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.722067 systemd-resolved[292]: Positive Trust Anchors:
Oct 31 00:44:49.730215 kernel: audit: type=1130 audit(1761871489.720:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.722081 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:44:49.734668 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 31 00:44:49.722109 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 31 00:44:49.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.728251 systemd-resolved[292]: Defaulting to hostname 'linux'.
Oct 31 00:44:49.745508 kernel: audit: type=1130 audit(1761871489.732:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.745530 kernel: Bridge firewalling registered
Oct 31 00:44:49.729182 systemd[1]: Started systemd-resolved.service.
Oct 31 00:44:49.733526 systemd[1]: Reached target nss-lookup.target.
Oct 31 00:44:49.745516 systemd-modules-load[291]: Inserted module 'br_netfilter'
Oct 31 00:44:49.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.751441 kernel: audit: type=1130 audit(1761871489.747:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.746626 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 31 00:44:49.748783 systemd[1]: Starting dracut-cmdline.service...
Oct 31 00:44:49.758393 dracut-cmdline[310]: dracut-dracut-053
Oct 31 00:44:49.759286 kernel: SCSI subsystem initialized
Oct 31 00:44:49.760750 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c54831d8f121b00ec4768e5b1793fd4b2eb83931891a70a1aede21bf2f1a9635
Oct 31 00:44:49.767704 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 00:44:49.767740 kernel: device-mapper: uevent: version 1.0.3
Oct 31 00:44:49.768755 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 31 00:44:49.771109 systemd-modules-load[291]: Inserted module 'dm_multipath'
Oct 31 00:44:49.771961 systemd[1]: Finished systemd-modules-load.service.
Oct 31 00:44:49.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.773625 systemd[1]: Starting systemd-sysctl.service...
Oct 31 00:44:49.777358 kernel: audit: type=1130 audit(1761871489.772:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.781805 systemd[1]: Finished systemd-sysctl.service.
Oct 31 00:44:49.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.786498 kernel: audit: type=1130 audit(1761871489.782:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.823445 kernel: Loading iSCSI transport class v2.0-870.
Oct 31 00:44:49.835470 kernel: iscsi: registered transport (tcp)
Oct 31 00:44:49.850465 kernel: iscsi: registered transport (qla4xxx)
Oct 31 00:44:49.850517 kernel: QLogic iSCSI HBA Driver
Oct 31 00:44:49.888134 systemd[1]: Finished dracut-cmdline.service.
Oct 31 00:44:49.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.889794 systemd[1]: Starting dracut-pre-udev.service...
Oct 31 00:44:49.893147 kernel: audit: type=1130 audit(1761871489.888:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:49.932452 kernel: raid6: neonx8 gen() 13617 MB/s
Oct 31 00:44:49.949441 kernel: raid6: neonx8 xor() 10718 MB/s
Oct 31 00:44:49.966447 kernel: raid6: neonx4 gen() 13412 MB/s
Oct 31 00:44:49.983437 kernel: raid6: neonx4 xor() 11108 MB/s
Oct 31 00:44:50.000443 kernel: raid6: neonx2 gen() 12916 MB/s
Oct 31 00:44:50.017435 kernel: raid6: neonx2 xor() 10247 MB/s
Oct 31 00:44:50.034451 kernel: raid6: neonx1 gen() 10423 MB/s
Oct 31 00:44:50.051444 kernel: raid6: neonx1 xor() 8698 MB/s
Oct 31 00:44:50.068449 kernel: raid6: int64x8 gen() 6196 MB/s
Oct 31 00:44:50.085454 kernel: raid6: int64x8 xor() 3527 MB/s
Oct 31 00:44:50.102458 kernel: raid6: int64x4 gen() 7148 MB/s
Oct 31 00:44:50.119453 kernel: raid6: int64x4 xor() 3845 MB/s
Oct 31 00:44:50.136449 kernel: raid6: int64x2 gen() 6118 MB/s
Oct 31 00:44:50.153454 kernel: raid6: int64x2 xor() 3309 MB/s
Oct 31 00:44:50.170455 kernel: raid6: int64x1 gen() 5021 MB/s
Oct 31 00:44:50.187647 kernel: raid6: int64x1 xor() 2641 MB/s
Oct 31 00:44:50.187699 kernel: raid6: using algorithm neonx8 gen() 13617 MB/s
Oct 31 00:44:50.187709 kernel: raid6: .... xor() 10718 MB/s, rmw enabled
Oct 31 00:44:50.188802 kernel: raid6: using neon recovery algorithm
Oct 31 00:44:50.199500 kernel: xor: measuring software checksum speed
Oct 31 00:44:50.199534 kernel: 8regs : 17206 MB/sec
Oct 31 00:44:50.200762 kernel: 32regs : 20665 MB/sec
Oct 31 00:44:50.200777 kernel: arm64_neon : 27747 MB/sec
Oct 31 00:44:50.200786 kernel: xor: using function: arm64_neon (27747 MB/sec)
Oct 31 00:44:50.255460 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Oct 31 00:44:50.265867 systemd[1]: Finished dracut-pre-udev.service.
Oct 31 00:44:50.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:50.269000 audit: BPF prog-id=7 op=LOAD
Oct 31 00:44:50.269000 audit: BPF prog-id=8 op=LOAD
Oct 31 00:44:50.270456 kernel: audit: type=1130 audit(1761871490.266:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:50.270453 systemd[1]: Starting systemd-udevd.service...
Oct 31 00:44:50.283897 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Oct 31 00:44:50.287261 systemd[1]: Started systemd-udevd.service.
Oct 31 00:44:50.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:50.288924 systemd[1]: Starting dracut-pre-trigger.service...
Oct 31 00:44:50.302717 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Oct 31 00:44:50.329831 systemd[1]: Finished dracut-pre-trigger.service.
Oct 31 00:44:50.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:50.331527 systemd[1]: Starting systemd-udev-trigger.service...
Oct 31 00:44:50.366695 systemd[1]: Finished systemd-udev-trigger.service.
Oct 31 00:44:50.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:50.399451 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 31 00:44:50.404234 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 31 00:44:50.404251 kernel: GPT:9289727 != 19775487
Oct 31 00:44:50.404259 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 31 00:44:50.404269 kernel: GPT:9289727 != 19775487 Oct 31 00:44:50.404277 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 00:44:50.404286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:44:50.424441 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (556) Oct 31 00:44:50.425767 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 31 00:44:50.433476 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 31 00:44:50.435523 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 31 00:44:50.439791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 00:44:50.443140 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 31 00:44:50.445001 systemd[1]: Starting disk-uuid.service... Oct 31 00:44:50.453542 disk-uuid[564]: Primary Header is updated. Oct 31 00:44:50.453542 disk-uuid[564]: Secondary Entries is updated. Oct 31 00:44:50.453542 disk-uuid[564]: Secondary Header is updated. Oct 31 00:44:50.456736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:44:51.464453 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:44:51.464504 disk-uuid[565]: The operation has completed successfully. Oct 31 00:44:51.516343 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 00:44:51.516532 systemd[1]: Finished disk-uuid.service. Oct 31 00:44:51.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.529563 systemd[1]: Starting verity-setup.service... 
Oct 31 00:44:51.542456 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 31 00:44:51.565495 systemd[1]: Found device dev-mapper-usr.device. Oct 31 00:44:51.568017 systemd[1]: Mounting sysusr-usr.mount... Oct 31 00:44:51.570839 systemd[1]: Finished verity-setup.service. Oct 31 00:44:51.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.619448 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 31 00:44:51.619632 systemd[1]: Mounted sysusr-usr.mount. Oct 31 00:44:51.620472 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 31 00:44:51.621181 systemd[1]: Starting ignition-setup.service... Oct 31 00:44:51.623885 systemd[1]: Starting parse-ip-for-networkd.service... Oct 31 00:44:51.634034 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 00:44:51.634076 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:44:51.634086 kernel: BTRFS info (device vda6): has skinny extents Oct 31 00:44:51.643094 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 00:44:51.651308 systemd[1]: Finished ignition-setup.service. Oct 31 00:44:51.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.653026 systemd[1]: Starting ignition-fetch-offline.service... Oct 31 00:44:51.719192 systemd[1]: Finished parse-ip-for-networkd.service. Oct 31 00:44:51.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:44:51.721000 audit: BPF prog-id=9 op=LOAD Oct 31 00:44:51.722460 systemd[1]: Starting systemd-networkd.service... Oct 31 00:44:51.731791 ignition[654]: Ignition 2.14.0 Oct 31 00:44:51.731803 ignition[654]: Stage: fetch-offline Oct 31 00:44:51.731844 ignition[654]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:44:51.731854 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:44:51.731992 ignition[654]: parsed url from cmdline: "" Oct 31 00:44:51.731995 ignition[654]: no config URL provided Oct 31 00:44:51.731999 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 00:44:51.732006 ignition[654]: no config at "/usr/lib/ignition/user.ign" Oct 31 00:44:51.732026 ignition[654]: op(1): [started] loading QEMU firmware config module Oct 31 00:44:51.732030 ignition[654]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 00:44:51.736744 ignition[654]: op(1): [finished] loading QEMU firmware config module Oct 31 00:44:51.743117 systemd-networkd[740]: lo: Link UP Oct 31 00:44:51.743130 systemd-networkd[740]: lo: Gained carrier Oct 31 00:44:51.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.743556 systemd-networkd[740]: Enumeration completed Oct 31 00:44:51.743747 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 00:44:51.743868 systemd[1]: Started systemd-networkd.service. Oct 31 00:44:51.744961 systemd[1]: Reached target network.target. Oct 31 00:44:51.747076 systemd-networkd[740]: eth0: Link UP Oct 31 00:44:51.747080 systemd-networkd[740]: eth0: Gained carrier Oct 31 00:44:51.748055 systemd[1]: Starting iscsiuio.service... Oct 31 00:44:51.755336 systemd[1]: Started iscsiuio.service. 
Oct 31 00:44:51.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.757475 systemd[1]: Starting iscsid.service... Oct 31 00:44:51.760946 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 31 00:44:51.760946 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 31 00:44:51.760946 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 31 00:44:51.760946 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 31 00:44:51.760946 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 31 00:44:51.760946 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 31 00:44:51.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.764763 systemd[1]: Started iscsid.service. Oct 31 00:44:51.768507 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 00:44:51.770603 systemd[1]: Starting dracut-initqueue.service... Oct 31 00:44:51.780711 systemd[1]: Finished dracut-initqueue.service. Oct 31 00:44:51.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Oct 31 00:44:51.781740 systemd[1]: Reached target remote-fs-pre.target. Oct 31 00:44:51.783247 systemd[1]: Reached target remote-cryptsetup.target. Oct 31 00:44:51.785111 systemd[1]: Reached target remote-fs.target. Oct 31 00:44:51.787938 systemd[1]: Starting dracut-pre-mount.service... Oct 31 00:44:51.795892 systemd[1]: Finished dracut-pre-mount.service. Oct 31 00:44:51.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.816456 ignition[654]: parsing config with SHA512: 5b6bdf272f03b8a57dca4ebb3f9a24a5fb8bcfa8077ef4da7243bbd2681ba11e0e41e844a60833b29a2748d4ed6fbc76ff831a29b3a2267935fc8319367629c8 Oct 31 00:44:51.824547 unknown[654]: fetched base config from "system" Oct 31 00:44:51.825162 ignition[654]: fetch-offline: fetch-offline passed Oct 31 00:44:51.824558 unknown[654]: fetched user config from "qemu" Oct 31 00:44:51.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.825229 ignition[654]: Ignition finished successfully Oct 31 00:44:51.826568 systemd[1]: Finished ignition-fetch-offline.service. Oct 31 00:44:51.828347 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 00:44:51.829273 systemd[1]: Starting ignition-kargs.service... Oct 31 00:44:51.839117 ignition[761]: Ignition 2.14.0 Oct 31 00:44:51.839128 ignition[761]: Stage: kargs Oct 31 00:44:51.839232 ignition[761]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:44:51.841783 systemd[1]: Finished ignition-kargs.service. 
Oct 31 00:44:51.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.839241 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:44:51.840199 ignition[761]: kargs: kargs passed Oct 31 00:44:51.844084 systemd[1]: Starting ignition-disks.service... Oct 31 00:44:51.840242 ignition[761]: Ignition finished successfully Oct 31 00:44:51.851384 ignition[767]: Ignition 2.14.0 Oct 31 00:44:51.851395 ignition[767]: Stage: disks Oct 31 00:44:51.851511 ignition[767]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:44:51.853697 systemd[1]: Finished ignition-disks.service. Oct 31 00:44:51.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.851522 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:44:51.855299 systemd[1]: Reached target initrd-root-device.target. Oct 31 00:44:51.852606 ignition[767]: disks: disks passed Oct 31 00:44:51.856757 systemd[1]: Reached target local-fs-pre.target. Oct 31 00:44:51.852658 ignition[767]: Ignition finished successfully Oct 31 00:44:51.858581 systemd[1]: Reached target local-fs.target. Oct 31 00:44:51.860018 systemd[1]: Reached target sysinit.target. Oct 31 00:44:51.861309 systemd[1]: Reached target basic.target. Oct 31 00:44:51.863773 systemd[1]: Starting systemd-fsck-root.service... Oct 31 00:44:51.876766 systemd-fsck[775]: ROOT: clean, 637/553520 files, 56031/553472 blocks Oct 31 00:44:51.881715 systemd[1]: Finished systemd-fsck-root.service. Oct 31 00:44:51.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:44:51.885286 systemd[1]: Mounting sysroot.mount... Oct 31 00:44:51.893432 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 31 00:44:51.893850 systemd[1]: Mounted sysroot.mount. Oct 31 00:44:51.894671 systemd[1]: Reached target initrd-root-fs.target. Oct 31 00:44:51.897075 systemd[1]: Mounting sysroot-usr.mount... Oct 31 00:44:51.898069 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 31 00:44:51.898106 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 00:44:51.898129 systemd[1]: Reached target ignition-diskful.target. Oct 31 00:44:51.900093 systemd[1]: Mounted sysroot-usr.mount. Oct 31 00:44:51.902375 systemd[1]: Starting initrd-setup-root.service... Oct 31 00:44:51.907048 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 00:44:51.911010 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Oct 31 00:44:51.915977 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 00:44:51.920324 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 00:44:51.953932 systemd[1]: Finished initrd-setup-root.service. Oct 31 00:44:51.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.955939 systemd[1]: Starting ignition-mount.service... Oct 31 00:44:51.957476 systemd[1]: Starting sysroot-boot.service... Oct 31 00:44:51.962451 bash[826]: umount: /sysroot/usr/share/oem: not mounted. 
Oct 31 00:44:51.972266 ignition[827]: INFO : Ignition 2.14.0 Oct 31 00:44:51.972266 ignition[827]: INFO : Stage: mount Oct 31 00:44:51.973847 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:44:51.973847 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:44:51.973847 ignition[827]: INFO : mount: mount passed Oct 31 00:44:51.973847 ignition[827]: INFO : Ignition finished successfully Oct 31 00:44:51.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:51.976156 systemd[1]: Finished ignition-mount.service. Oct 31 00:44:51.982009 systemd[1]: Finished sysroot-boot.service. Oct 31 00:44:51.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:52.580978 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 31 00:44:52.591368 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (836) Oct 31 00:44:52.591410 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 00:44:52.591426 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:44:52.592028 kernel: BTRFS info (device vda6): has skinny extents Oct 31 00:44:52.598511 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 31 00:44:52.600106 systemd[1]: Starting ignition-files.service... 
Oct 31 00:44:52.616637 ignition[856]: INFO : Ignition 2.14.0 Oct 31 00:44:52.616637 ignition[856]: INFO : Stage: files Oct 31 00:44:52.618118 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:44:52.618118 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:44:52.618118 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Oct 31 00:44:52.621597 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 00:44:52.621597 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 00:44:52.626684 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 00:44:52.628111 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 00:44:52.629830 unknown[856]: wrote ssh authorized keys file for user: core Oct 31 00:44:52.631086 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 00:44:52.631086 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 00:44:52.631086 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 00:44:52.631086 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 31 00:44:52.631086 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 31 00:44:52.731573 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 31 00:44:52.865503 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 31 
00:44:52.865503 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:44:52.872364 
ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:44:52.872364 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 31 00:44:53.286882 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 31 00:44:53.613473 systemd-networkd[740]: eth0: Gained IPv6LL Oct 31 00:44:53.650896 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:44:53.650896 ignition[856]: INFO : files: op(c): [started] processing unit "containerd.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(c): [finished] processing unit "containerd.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Oct 31 
00:44:53.661732 ignition[856]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Oct 31 00:44:53.661732 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 00:44:53.711992 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 00:44:53.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:53.716734 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 00:44:53.716734 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 00:44:53.716734 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 00:44:53.716734 ignition[856]: INFO : files: files passed Oct 31 00:44:53.716734 ignition[856]: INFO : Ignition finished successfully Oct 31 00:44:53.714683 systemd[1]: Finished ignition-files.service. 
Oct 31 00:44:53.716306 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 31 00:44:53.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:53.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:53.717237 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 31 00:44:53.731946 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 31 00:44:53.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:53.717938 systemd[1]: Starting ignition-quench.service... Oct 31 00:44:53.739519 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 00:44:53.726864 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 00:44:53.726961 systemd[1]: Finished ignition-quench.service. Oct 31 00:44:53.731068 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 31 00:44:53.732923 systemd[1]: Reached target ignition-complete.target. Oct 31 00:44:53.735647 systemd[1]: Starting initrd-parse-etc.service... Oct 31 00:44:53.754038 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 00:44:53.754162 systemd[1]: Finished initrd-parse-etc.service. Oct 31 00:44:53.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 00:44:53.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:53.755945 systemd[1]: Reached target initrd-fs.target. Oct 31 00:44:53.757188 systemd[1]: Reached target initrd.target. Oct 31 00:44:53.758474 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 31 00:44:53.759259 systemd[1]: Starting dracut-pre-pivot.service... Oct 31 00:44:53.776867 systemd[1]: Finished dracut-pre-pivot.service. Oct 31 00:44:53.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:53.778578 systemd[1]: Starting initrd-cleanup.service... Oct 31 00:44:53.787136 systemd[1]: Stopped target nss-lookup.target. Oct 31 00:44:53.788126 systemd[1]: Stopped target remote-cryptsetup.target. Oct 31 00:44:53.789610 systemd[1]: Stopped target timers.target. Oct 31 00:44:53.790983 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 00:44:53.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:53.791106 systemd[1]: Stopped dracut-pre-pivot.service. Oct 31 00:44:53.792447 systemd[1]: Stopped target initrd.target. Oct 31 00:44:53.793790 systemd[1]: Stopped target basic.target. Oct 31 00:44:53.795184 systemd[1]: Stopped target ignition-complete.target. Oct 31 00:44:53.796579 systemd[1]: Stopped target ignition-diskful.target. Oct 31 00:44:53.797895 systemd[1]: Stopped target initrd-root-device.target. Oct 31 00:44:53.799338 systemd[1]: Stopped target remote-fs.target. Oct 31 00:44:53.800745 systemd[1]: Stopped target remote-fs-pre.target. 
Oct 31 00:44:53.804851 systemd[1]: Stopped target sysinit.target.
Oct 31 00:44:53.806158 systemd[1]: Stopped target local-fs.target.
Oct 31 00:44:53.807489 systemd[1]: Stopped target local-fs-pre.target.
Oct 31 00:44:53.808854 systemd[1]: Stopped target swap.target.
Oct 31 00:44:53.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.810039 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:44:53.810157 systemd[1]: Stopped dracut-pre-mount.service.
Oct 31 00:44:53.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.811475 systemd[1]: Stopped target cryptsetup.target.
Oct 31 00:44:53.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.812604 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:44:53.812707 systemd[1]: Stopped dracut-initqueue.service.
Oct 31 00:44:53.814165 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:44:53.814265 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 31 00:44:53.815512 systemd[1]: Stopped target paths.target.
Oct 31 00:44:53.816906 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:44:53.821255 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 31 00:44:53.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.822256 systemd[1]: Stopped target slices.target.
Oct 31 00:44:53.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.823672 systemd[1]: Stopped target sockets.target.
Oct 31 00:44:53.824958 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:44:53.831282 iscsid[746]: iscsid shutting down.
Oct 31 00:44:53.825078 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 31 00:44:53.826843 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:44:53.826939 systemd[1]: Stopped ignition-files.service.
Oct 31 00:44:53.829096 systemd[1]: Stopping ignition-mount.service...
Oct 31 00:44:53.832492 systemd[1]: Stopping iscsid.service...
Oct 31 00:44:53.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.834449 systemd[1]: Stopping sysroot-boot.service...
Oct 31 00:44:53.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.835166 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:44:53.835304 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 31 00:44:53.836633 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:44:53.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.843690 ignition[896]: INFO : Ignition 2.14.0
Oct 31 00:44:53.843690 ignition[896]: INFO : Stage: umount
Oct 31 00:44:53.843690 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:44:53.843690 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:44:53.843690 ignition[896]: INFO : umount: umount passed
Oct 31 00:44:53.843690 ignition[896]: INFO : Ignition finished successfully
Oct 31 00:44:53.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.836731 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 31 00:44:53.839862 systemd[1]: iscsid.service: Deactivated successfully.
Oct 31 00:44:53.839959 systemd[1]: Stopped iscsid.service.
Oct 31 00:44:53.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.843941 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:44:53.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.844022 systemd[1]: Closed iscsid.socket.
Oct 31 00:44:53.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.845281 systemd[1]: Stopping iscsiuio.service...
Oct 31 00:44:53.847733 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:44:53.847819 systemd[1]: Finished initrd-cleanup.service.
Oct 31 00:44:53.850394 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:44:53.850773 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 31 00:44:53.850857 systemd[1]: Stopped iscsiuio.service.
Oct 31 00:44:53.851796 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:44:53.851869 systemd[1]: Stopped ignition-mount.service.
Oct 31 00:44:53.854079 systemd[1]: Stopped target network.target.
Oct 31 00:44:53.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.854934 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:44:53.854975 systemd[1]: Closed iscsiuio.socket.
Oct 31 00:44:53.856253 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:44:53.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.856298 systemd[1]: Stopped ignition-disks.service.
Oct 31 00:44:53.858493 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:44:53.858539 systemd[1]: Stopped ignition-kargs.service.
Oct 31 00:44:53.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.860330 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:44:53.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.860372 systemd[1]: Stopped ignition-setup.service.
Oct 31 00:44:53.861956 systemd[1]: Stopping systemd-networkd.service...
Oct 31 00:44:53.863258 systemd[1]: Stopping systemd-resolved.service...
Oct 31 00:44:53.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.871180 systemd-networkd[740]: eth0: DHCPv6 lease lost
Oct 31 00:44:53.889000 audit: BPF prog-id=9 op=UNLOAD
Oct 31 00:44:53.872236 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:44:53.872341 systemd[1]: Stopped systemd-networkd.service.
Oct 31 00:44:53.873914 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:44:53.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.894000 audit: BPF prog-id=6 op=UNLOAD
Oct 31 00:44:53.873942 systemd[1]: Closed systemd-networkd.socket.
Oct 31 00:44:53.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.875855 systemd[1]: Stopping network-cleanup.service...
Oct 31 00:44:53.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.876640 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:44:53.876699 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 31 00:44:53.878094 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:44:53.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.878140 systemd[1]: Stopped systemd-sysctl.service.
Oct 31 00:44:53.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.882778 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:44:53.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.882831 systemd[1]: Stopped systemd-modules-load.service.
Oct 31 00:44:53.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.883892 systemd[1]: Stopping systemd-udevd.service...
Oct 31 00:44:53.887790 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 31 00:44:53.888242 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:44:53.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.888346 systemd[1]: Stopped systemd-resolved.service.
Oct 31 00:44:53.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.892384 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:44:53.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.892485 systemd[1]: Stopped network-cleanup.service.
Oct 31 00:44:53.893917 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:44:53.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:53.893987 systemd[1]: Stopped sysroot-boot.service.
Oct 31 00:44:53.895383 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:44:53.895515 systemd[1]: Stopped systemd-udevd.service.
Oct 31 00:44:53.896731 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:44:53.896763 systemd[1]: Closed systemd-udevd-control.socket.
Oct 31 00:44:53.897968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:44:53.898002 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 31 00:44:53.899449 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:44:53.899496 systemd[1]: Stopped dracut-pre-udev.service.
Oct 31 00:44:53.900863 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:44:53.900902 systemd[1]: Stopped dracut-cmdline.service.
Oct 31 00:44:53.902320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:44:53.925000 audit: BPF prog-id=5 op=UNLOAD
Oct 31 00:44:53.925000 audit: BPF prog-id=4 op=UNLOAD
Oct 31 00:44:53.925000 audit: BPF prog-id=3 op=UNLOAD
Oct 31 00:44:53.925000 audit: BPF prog-id=8 op=UNLOAD
Oct 31 00:44:53.925000 audit: BPF prog-id=7 op=UNLOAD
Oct 31 00:44:53.902363 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 31 00:44:53.903701 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:44:53.903742 systemd[1]: Stopped initrd-setup-root.service.
Oct 31 00:44:53.906034 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 31 00:44:53.907375 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 00:44:53.907489 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Oct 31 00:44:53.910144 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:44:53.910186 systemd[1]: Stopped kmod-static-nodes.service.
Oct 31 00:44:53.911034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:44:53.911076 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct 31 00:44:53.913173 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 31 00:44:53.913595 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:44:53.913677 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct 31 00:44:53.915002 systemd[1]: Reached target initrd-switch-root.target.
Oct 31 00:44:53.917114 systemd[1]: Starting initrd-switch-root.service...
Oct 31 00:44:53.922856 systemd[1]: Switching root.
Oct 31 00:44:53.942290 systemd-journald[290]: Journal stopped
Oct 31 00:44:55.968623 systemd-journald[290]: Received SIGTERM from PID 1 (n/a).
Oct 31 00:44:55.968682 kernel: SELinux: Class mctp_socket not defined in policy.
Oct 31 00:44:55.968694 kernel: SELinux: Class anon_inode not defined in policy.
Oct 31 00:44:55.968704 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct 31 00:44:55.968713 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 00:44:55.968728 kernel: SELinux: policy capability open_perms=1
Oct 31 00:44:55.968738 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 00:44:55.968747 kernel: SELinux: policy capability always_check_network=0
Oct 31 00:44:55.968756 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 00:44:55.968772 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 00:44:55.968781 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 00:44:55.968791 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 00:44:55.968800 kernel: kauditd_printk_skb: 71 callbacks suppressed
Oct 31 00:44:55.968810 kernel: audit: type=1403 audit(1761871494.030:82): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 00:44:55.968821 systemd[1]: Successfully loaded SELinux policy in 36.940ms.
Oct 31 00:44:55.968836 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.319ms.
Oct 31 00:44:55.968851 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 31 00:44:55.968878 systemd[1]: Detected virtualization kvm.
Oct 31 00:44:55.968890 systemd[1]: Detected architecture arm64.
Oct 31 00:44:55.968900 systemd[1]: Detected first boot.
Oct 31 00:44:55.968911 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:44:55.968921 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Oct 31 00:44:55.968935 kernel: audit: type=1400 audit(1761871494.186:83): avc: denied { associate } for pid=946 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Oct 31 00:44:55.968946 kernel: audit: type=1300 audit(1761871494.186:83): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c766c a1=40000caae0 a2=40000d0a00 a3=32 items=0 ppid=929 pid=946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:44:55.968956 kernel: audit: type=1327 audit(1761871494.186:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Oct 31 00:44:55.968968 kernel: audit: type=1400 audit(1761871494.187:84): avc: denied { associate } for pid=946 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Oct 31 00:44:55.968978 kernel: audit: type=1300 audit(1761871494.187:84): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c7745 a2=1ed a3=0 items=2 ppid=929 pid=946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:44:55.968988 kernel: audit: type=1307 audit(1761871494.187:84): cwd="/"
Oct 31 00:44:55.968999 kernel: audit: type=1302 audit(1761871494.187:84): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 00:44:55.969010 kernel: audit: type=1302 audit(1761871494.187:84): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 00:44:55.969020 kernel: audit: type=1327 audit(1761871494.187:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Oct 31 00:44:55.969030 systemd[1]: Populated /etc with preset unit settings.
Oct 31 00:44:55.969041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 31 00:44:55.969051 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 31 00:44:55.969062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:44:55.969074 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 00:44:55.969084 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Oct 31 00:44:55.969094 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 31 00:44:55.969105 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 31 00:44:55.969115 systemd[1]: Created slice system-getty.slice.
Oct 31 00:44:55.969126 systemd[1]: Created slice system-modprobe.slice.
Oct 31 00:44:55.969137 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 31 00:44:55.969148 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 31 00:44:55.969159 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 31 00:44:55.969169 systemd[1]: Created slice user.slice.
Oct 31 00:44:55.969178 systemd[1]: Started systemd-ask-password-console.path.
Oct 31 00:44:55.969190 systemd[1]: Started systemd-ask-password-wall.path.
Oct 31 00:44:55.969201 systemd[1]: Set up automount boot.automount.
Oct 31 00:44:55.969211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 31 00:44:55.969223 systemd[1]: Reached target integritysetup.target.
Oct 31 00:44:55.969235 systemd[1]: Reached target remote-cryptsetup.target.
Oct 31 00:44:55.969245 systemd[1]: Reached target remote-fs.target.
Oct 31 00:44:55.969255 systemd[1]: Reached target slices.target.
Oct 31 00:44:55.969266 systemd[1]: Reached target swap.target.
Oct 31 00:44:55.969276 systemd[1]: Reached target torcx.target.
Oct 31 00:44:55.969297 systemd[1]: Reached target veritysetup.target.
Oct 31 00:44:55.969308 systemd[1]: Listening on systemd-coredump.socket.
Oct 31 00:44:55.969319 systemd[1]: Listening on systemd-initctl.socket.
Oct 31 00:44:55.969329 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 31 00:44:55.969341 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 31 00:44:55.969353 systemd[1]: Listening on systemd-journald.socket.
Oct 31 00:44:55.969364 systemd[1]: Listening on systemd-networkd.socket.
Oct 31 00:44:55.969374 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 31 00:44:55.969385 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 31 00:44:55.969396 systemd[1]: Listening on systemd-userdbd.socket.
Oct 31 00:44:55.969406 systemd[1]: Mounting dev-hugepages.mount...
Oct 31 00:44:55.969423 systemd[1]: Mounting dev-mqueue.mount...
Oct 31 00:44:55.969435 systemd[1]: Mounting media.mount...
Oct 31 00:44:55.969445 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 31 00:44:55.969457 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 31 00:44:55.969467 systemd[1]: Mounting tmp.mount...
Oct 31 00:44:55.969478 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 31 00:44:55.969490 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 00:44:55.969501 systemd[1]: Starting kmod-static-nodes.service...
Oct 31 00:44:55.969521 systemd[1]: Starting modprobe@configfs.service...
Oct 31 00:44:55.969533 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 00:44:55.969544 systemd[1]: Starting modprobe@drm.service...
Oct 31 00:44:55.969554 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 00:44:55.969566 systemd[1]: Starting modprobe@fuse.service...
Oct 31 00:44:55.969577 systemd[1]: Starting modprobe@loop.service...
Oct 31 00:44:55.969587 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 00:44:55.969598 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 31 00:44:55.969609 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Oct 31 00:44:55.969618 kernel: fuse: init (API version 7.34)
Oct 31 00:44:55.969629 systemd[1]: Starting systemd-journald.service...
Oct 31 00:44:55.969642 systemd[1]: Starting systemd-modules-load.service...
Oct 31 00:44:55.969656 kernel: loop: module loaded
Oct 31 00:44:55.969665 systemd[1]: Starting systemd-network-generator.service...
Oct 31 00:44:55.969676 systemd[1]: Starting systemd-remount-fs.service...
Oct 31 00:44:55.969689 systemd[1]: Starting systemd-udev-trigger.service...
Oct 31 00:44:55.969699 systemd[1]: Mounted dev-hugepages.mount.
Oct 31 00:44:55.969709 systemd[1]: Mounted dev-mqueue.mount.
Oct 31 00:44:55.969719 systemd[1]: Mounted media.mount.
Oct 31 00:44:55.969732 systemd-journald[1034]: Journal started
Oct 31 00:44:55.969776 systemd-journald[1034]: Runtime Journal (/run/log/journal/a49e375400a340e4b6aa9e6ca4402a74) is 6.0M, max 48.7M, 42.6M free.
Oct 31 00:44:55.892000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 31 00:44:55.892000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Oct 31 00:44:55.967000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 31 00:44:55.967000 audit[1034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe3fc6f40 a2=4000 a3=1 items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:44:55.967000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 31 00:44:55.970955 systemd[1]: Started systemd-journald.service.
Oct 31 00:44:55.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.972816 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 31 00:44:55.973880 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 31 00:44:55.974952 systemd[1]: Mounted tmp.mount.
Oct 31 00:44:55.975925 systemd[1]: Finished kmod-static-nodes.service.
Oct 31 00:44:55.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.977052 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 00:44:55.977223 systemd[1]: Finished modprobe@configfs.service.
Oct 31 00:44:55.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.978679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:44:55.978872 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 00:44:55.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.980134 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:44:55.980312 systemd[1]: Finished modprobe@drm.service.
Oct 31 00:44:55.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.981450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:44:55.981694 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 00:44:55.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.982762 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 00:44:55.982974 systemd[1]: Finished modprobe@fuse.service.
Oct 31 00:44:55.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.984005 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:44:55.985755 systemd[1]: Finished modprobe@loop.service.
Oct 31 00:44:55.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.986991 systemd[1]: Finished systemd-modules-load.service.
Oct 31 00:44:55.988546 systemd[1]: Finished systemd-network-generator.service.
Oct 31 00:44:55.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.989843 systemd[1]: Finished systemd-remount-fs.service.
Oct 31 00:44:55.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:55.991062 systemd[1]: Reached target network-pre.target.
Oct 31 00:44:55.993739 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 31 00:44:55.995772 systemd[1]: Mounting sys-kernel-config.mount...
Oct 31 00:44:55.996496 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 00:44:55.998392 systemd[1]: Starting systemd-hwdb-update.service...
Oct 31 00:44:56.000795 systemd[1]: Starting systemd-journal-flush.service...
Oct 31 00:44:56.001865 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:44:56.003072 systemd[1]: Starting systemd-random-seed.service...
Oct 31 00:44:56.004250 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 00:44:56.008727 systemd[1]: Starting systemd-sysctl.service...
Oct 31 00:44:56.011512 systemd-journald[1034]: Time spent on flushing to /var/log/journal/a49e375400a340e4b6aa9e6ca4402a74 is 12.177ms for 933 entries.
Oct 31 00:44:56.011512 systemd-journald[1034]: System Journal (/var/log/journal/a49e375400a340e4b6aa9e6ca4402a74) is 8.0M, max 195.6M, 187.6M free.
Oct 31 00:44:56.036820 systemd-journald[1034]: Received client request to flush runtime journal.
Oct 31 00:44:56.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.013181 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 31 00:44:56.014224 systemd[1]: Mounted sys-kernel-config.mount.
Oct 31 00:44:56.024961 systemd[1]: Finished systemd-random-seed.service.
Oct 31 00:44:56.026248 systemd[1]: Finished systemd-sysctl.service.
Oct 31 00:44:56.027200 systemd[1]: Reached target first-boot-complete.target.
Oct 31 00:44:56.028602 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 31 00:44:56.031044 systemd[1]: Starting systemd-sysusers.service...
Oct 31 00:44:56.037822 systemd[1]: Finished systemd-journal-flush.service.
Oct 31 00:44:56.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.040356 systemd[1]: Finished systemd-udev-trigger.service.
Oct 31 00:44:56.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.042695 systemd[1]: Starting systemd-udev-settle.service...
Oct 31 00:44:56.051092 udevadm[1083]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 31 00:44:56.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.054575 systemd[1]: Finished systemd-sysusers.service.
Oct 31 00:44:56.056735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 31 00:44:56.076746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 31 00:44:56.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.423085 systemd[1]: Finished systemd-hwdb-update.service.
Oct 31 00:44:56.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:44:56.425209 systemd[1]: Starting systemd-udevd.service...
Oct 31 00:44:56.441225 systemd-udevd[1089]: Using default interface naming scheme 'v252'. Oct 31 00:44:56.458817 systemd[1]: Started systemd-udevd.service. Oct 31 00:44:56.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.463160 systemd[1]: Starting systemd-networkd.service... Oct 31 00:44:56.467747 systemd[1]: Starting systemd-userdbd.service... Oct 31 00:44:56.483839 systemd[1]: Found device dev-ttyAMA0.device. Oct 31 00:44:56.520805 systemd[1]: Started systemd-userdbd.service. Oct 31 00:44:56.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.531523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 00:44:56.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.560902 systemd[1]: Finished systemd-udev-settle.service. Oct 31 00:44:56.563181 systemd[1]: Starting lvm2-activation-early.service... Oct 31 00:44:56.567829 systemd-networkd[1103]: lo: Link UP Oct 31 00:44:56.567838 systemd-networkd[1103]: lo: Gained carrier Oct 31 00:44:56.568209 systemd-networkd[1103]: Enumeration completed Oct 31 00:44:56.568328 systemd[1]: Started systemd-networkd.service. Oct 31 00:44:56.568333 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 00:44:56.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:44:56.570113 systemd-networkd[1103]: eth0: Link UP Oct 31 00:44:56.570120 systemd-networkd[1103]: eth0: Gained carrier Oct 31 00:44:56.574119 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 00:44:56.595602 systemd-networkd[1103]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 00:44:56.604457 systemd[1]: Finished lvm2-activation-early.service. Oct 31 00:44:56.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.605465 systemd[1]: Reached target cryptsetup.target. Oct 31 00:44:56.607568 systemd[1]: Starting lvm2-activation.service... Oct 31 00:44:56.611432 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 00:44:56.647466 systemd[1]: Finished lvm2-activation.service. Oct 31 00:44:56.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.648361 systemd[1]: Reached target local-fs-pre.target. Oct 31 00:44:56.649232 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 00:44:56.649264 systemd[1]: Reached target local-fs.target. Oct 31 00:44:56.650036 systemd[1]: Reached target machines.target. Oct 31 00:44:56.652007 systemd[1]: Starting ldconfig.service... Oct 31 00:44:56.653069 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 00:44:56.653121 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Oct 31 00:44:56.654120 systemd[1]: Starting systemd-boot-update.service... Oct 31 00:44:56.655977 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 31 00:44:56.659975 systemd[1]: Starting systemd-machine-id-commit.service... Oct 31 00:44:56.662029 systemd[1]: Starting systemd-sysext.service... Oct 31 00:44:56.665247 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1128 (bootctl) Oct 31 00:44:56.666373 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 31 00:44:56.672826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 31 00:44:56.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.675929 systemd[1]: Unmounting usr-share-oem.mount... Oct 31 00:44:56.680137 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 31 00:44:56.680396 systemd[1]: Unmounted usr-share-oem.mount. Oct 31 00:44:56.692429 kernel: loop0: detected capacity change from 0 to 207008 Oct 31 00:44:56.708175 systemd-fsck[1137]: fsck.fat 4.2 (2021-01-31) Oct 31 00:44:56.708175 systemd-fsck[1137]: /dev/vda1: 236 files, 117310/258078 clusters Oct 31 00:44:56.709598 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 31 00:44:56.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.760407 systemd[1]: Finished systemd-machine-id-commit.service. 
Oct 31 00:44:56.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.764441 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 00:44:56.785442 kernel: loop1: detected capacity change from 0 to 207008 Oct 31 00:44:56.792425 (sd-sysext)[1146]: Using extensions 'kubernetes'. Oct 31 00:44:56.792793 (sd-sysext)[1146]: Merged extensions into '/usr'. Oct 31 00:44:56.808210 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 00:44:56.809572 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 00:44:56.811653 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 00:44:56.814803 systemd[1]: Starting modprobe@loop.service... Oct 31 00:44:56.815740 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 00:44:56.815871 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 00:44:56.816682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:44:56.816849 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 00:44:56.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.818500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 31 00:44:56.818653 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 00:44:56.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.820160 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:44:56.820338 systemd[1]: Finished modprobe@loop.service. Oct 31 00:44:56.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.821619 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 00:44:56.821726 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 00:44:56.873307 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 00:44:56.878742 systemd[1]: Finished ldconfig.service. Oct 31 00:44:56.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.965766 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Oct 31 00:44:56.967715 systemd[1]: Mounting boot.mount... Oct 31 00:44:56.969606 systemd[1]: Mounting usr-share-oem.mount... Oct 31 00:44:56.976146 systemd[1]: Mounted boot.mount. Oct 31 00:44:56.977137 systemd[1]: Mounted usr-share-oem.mount. Oct 31 00:44:56.979075 systemd[1]: Finished systemd-sysext.service. Oct 31 00:44:56.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.981355 systemd[1]: Starting ensure-sysext.service... Oct 31 00:44:56.983275 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 31 00:44:56.986532 systemd[1]: Finished systemd-boot-update.service. Oct 31 00:44:56.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:56.989093 systemd[1]: Reloading. Oct 31 00:44:56.992634 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 31 00:44:56.993452 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 00:44:56.994824 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Oct 31 00:44:57.027100 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2025-10-31T00:44:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 00:44:57.027524 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2025-10-31T00:44:57Z" level=info msg="torcx already run" Oct 31 00:44:57.093515 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 00:44:57.093540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 00:44:57.110843 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:44:57.152274 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 31 00:44:57.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.156728 systemd[1]: Starting audit-rules.service... Oct 31 00:44:57.158890 systemd[1]: Starting clean-ca-certificates.service... Oct 31 00:44:57.160955 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 31 00:44:57.163590 systemd[1]: Starting systemd-resolved.service... Oct 31 00:44:57.165899 systemd[1]: Starting systemd-timesyncd.service... Oct 31 00:44:57.167912 systemd[1]: Starting systemd-update-utmp.service... Oct 31 00:44:57.169552 systemd[1]: Finished clean-ca-certificates.service. 
Oct 31 00:44:57.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.173025 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 00:44:57.173000 audit[1242]: SYSTEM_BOOT pid=1242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.177445 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.178882 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 00:44:57.180882 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 00:44:57.183560 systemd[1]: Starting modprobe@loop.service... Oct 31 00:44:57.184465 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.184600 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 00:44:57.184702 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 00:44:57.185680 systemd[1]: Finished systemd-update-utmp.service. Oct 31 00:44:57.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.187058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:44:57.187217 systemd[1]: Finished modprobe@dm_mod.service. 
Oct 31 00:44:57.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.190058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 00:44:57.190207 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 00:44:57.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.191846 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:44:57.192011 systemd[1]: Finished modprobe@loop.service. Oct 31 00:44:57.193906 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 31 00:44:57.194009 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.195481 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.196770 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 00:44:57.198756 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 00:44:57.200884 systemd[1]: Starting modprobe@loop.service... Oct 31 00:44:57.201720 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.201852 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 00:44:57.201980 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 00:44:57.202969 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 31 00:44:57.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.204445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:44:57.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.204606 systemd[1]: Finished modprobe@dm_mod.service. 
Oct 31 00:44:57.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.205923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 00:44:57.206105 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 00:44:57.207368 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 00:44:57.208772 systemd[1]: Starting systemd-update-done.service... Oct 31 00:44:57.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.210309 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:44:57.210504 systemd[1]: Finished modprobe@loop.service. Oct 31 00:44:57.213690 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.215036 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 00:44:57.216955 systemd[1]: Starting modprobe@drm.service... Oct 31 00:44:57.218999 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 00:44:57.220910 systemd[1]: Starting modprobe@loop.service... 
Oct 31 00:44:57.221759 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.221891 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 00:44:57.223189 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 31 00:44:57.224172 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 00:44:57.225596 systemd[1]: Finished systemd-update-done.service. Oct 31 00:44:57.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.230499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 00:44:57.230662 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 00:44:57.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:44:57.233087 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 00:44:57.233241 systemd[1]: Finished modprobe@drm.service. 
Oct 31 00:44:57.233734 augenrules[1271]: No rules Oct 31 00:44:57.232000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 31 00:44:57.232000 audit[1271]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffceddbf70 a2=420 a3=0 items=0 ppid=1230 pid=1271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:44:57.232000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 31 00:44:57.235398 systemd[1]: Finished audit-rules.service. Oct 31 00:44:57.239999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 00:44:57.240157 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 00:44:57.241546 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 00:44:57.241724 systemd[1]: Finished modprobe@loop.service. Oct 31 00:44:57.242960 systemd[1]: Started systemd-timesyncd.service. Oct 31 00:44:57.245768 systemd[1]: Finished ensure-sysext.service. Oct 31 00:44:57.245832 systemd-timesyncd[1241]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 31 00:44:57.245880 systemd-timesyncd[1241]: Initial clock synchronization to Fri 2025-10-31 00:44:57.626995 UTC. Oct 31 00:44:57.247632 systemd[1]: Reached target time-set.target. Oct 31 00:44:57.248451 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 00:44:57.248498 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.258466 systemd-resolved[1238]: Positive Trust Anchors: Oct 31 00:44:57.258761 systemd-resolved[1238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 00:44:57.258843 systemd-resolved[1238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 31 00:44:57.269569 systemd-resolved[1238]: Defaulting to hostname 'linux'. Oct 31 00:44:57.271229 systemd[1]: Started systemd-resolved.service. Oct 31 00:44:57.272146 systemd[1]: Reached target network.target. Oct 31 00:44:57.272958 systemd[1]: Reached target nss-lookup.target. Oct 31 00:44:57.273765 systemd[1]: Reached target sysinit.target. Oct 31 00:44:57.274589 systemd[1]: Started motdgen.path. Oct 31 00:44:57.275262 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 31 00:44:57.276530 systemd[1]: Started logrotate.timer. Oct 31 00:44:57.277291 systemd[1]: Started mdadm.timer. Oct 31 00:44:57.277985 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 31 00:44:57.278794 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 00:44:57.278825 systemd[1]: Reached target paths.target. Oct 31 00:44:57.279525 systemd[1]: Reached target timers.target. Oct 31 00:44:57.280578 systemd[1]: Listening on dbus.socket. Oct 31 00:44:57.282517 systemd[1]: Starting docker.socket... Oct 31 00:44:57.284248 systemd[1]: Listening on sshd.socket. Oct 31 00:44:57.285401 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Oct 31 00:44:57.285766 systemd[1]: Listening on docker.socket. Oct 31 00:44:57.286522 systemd[1]: Reached target sockets.target. Oct 31 00:44:57.287243 systemd[1]: Reached target basic.target. Oct 31 00:44:57.288189 systemd[1]: System is tainted: cgroupsv1 Oct 31 00:44:57.288243 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.288264 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 31 00:44:57.289393 systemd[1]: Starting containerd.service... Oct 31 00:44:57.291343 systemd[1]: Starting dbus.service... Oct 31 00:44:57.293391 systemd[1]: Starting enable-oem-cloudinit.service... Oct 31 00:44:57.295407 systemd[1]: Starting extend-filesystems.service... Oct 31 00:44:57.296290 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 31 00:44:57.297509 systemd[1]: Starting motdgen.service... Oct 31 00:44:57.298780 jq[1292]: false Oct 31 00:44:57.299401 systemd[1]: Starting prepare-helm.service... Oct 31 00:44:57.302431 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 31 00:44:57.304433 systemd[1]: Starting sshd-keygen.service... Oct 31 00:44:57.307133 systemd[1]: Starting systemd-logind.service... Oct 31 00:44:57.307899 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 00:44:57.307974 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 00:44:57.309196 systemd[1]: Starting update-engine.service... Oct 31 00:44:57.311546 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 31 00:44:57.314148 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 31 00:44:57.314884 extend-filesystems[1293]: Found loop1 Oct 31 00:44:57.314884 extend-filesystems[1293]: Found vda Oct 31 00:44:57.314454 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 31 00:44:57.326587 jq[1308]: true Oct 31 00:44:57.326696 extend-filesystems[1293]: Found vda1 Oct 31 00:44:57.326696 extend-filesystems[1293]: Found vda2 Oct 31 00:44:57.326696 extend-filesystems[1293]: Found vda3 Oct 31 00:44:57.326696 extend-filesystems[1293]: Found usr Oct 31 00:44:57.326696 extend-filesystems[1293]: Found vda4 Oct 31 00:44:57.326696 extend-filesystems[1293]: Found vda6 Oct 31 00:44:57.326696 extend-filesystems[1293]: Found vda7 Oct 31 00:44:57.326696 extend-filesystems[1293]: Found vda9 Oct 31 00:44:57.326696 extend-filesystems[1293]: Checking size of /dev/vda9 Oct 31 00:44:57.315645 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 00:44:57.345480 tar[1310]: linux-arm64/LICENSE Oct 31 00:44:57.345480 tar[1310]: linux-arm64/helm Oct 31 00:44:57.315879 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 31 00:44:57.335999 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 00:44:57.347842 jq[1320]: true Oct 31 00:44:57.336245 systemd[1]: Finished motdgen.service. Oct 31 00:44:57.348795 dbus-daemon[1291]: [system] SELinux support is enabled Oct 31 00:44:57.348977 systemd[1]: Started dbus.service. Oct 31 00:44:57.364881 extend-filesystems[1293]: Resized partition /dev/vda9 Oct 31 00:44:57.365995 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 31 00:44:57.351484 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 00:44:57.366285 extend-filesystems[1340]: resize2fs 1.46.5 (30-Dec-2021) Oct 31 00:44:57.351506 systemd[1]: Reached target system-config.target. 
Oct 31 00:44:57.352392 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 00:44:57.352426 systemd[1]: Reached target user-config.target.
Oct 31 00:44:57.382566 update_engine[1306]: I1031 00:44:57.382294 1306 main.cc:92] Flatcar Update Engine starting
Oct 31 00:44:57.392493 update_engine[1306]: I1031 00:44:57.384953 1306 update_check_scheduler.cc:74] Next update check in 9m30s
Oct 31 00:44:57.384930 systemd[1]: Started update-engine.service.
Oct 31 00:44:57.387377 systemd[1]: Started locksmithd.service.
Oct 31 00:44:57.391641 systemd-logind[1305]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 31 00:44:57.393831 systemd-logind[1305]: New seat seat0.
Oct 31 00:44:57.397738 systemd[1]: Started systemd-logind.service.
Oct 31 00:44:57.406455 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 31 00:44:57.422406 env[1321]: time="2025-10-31T00:44:57.422331320Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 31 00:44:57.423718 extend-filesystems[1340]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 00:44:57.423718 extend-filesystems[1340]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 00:44:57.423718 extend-filesystems[1340]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 31 00:44:57.428267 extend-filesystems[1293]: Resized filesystem in /dev/vda9
Oct 31 00:44:57.427835 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 00:44:57.430175 bash[1348]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 00:44:57.428090 systemd[1]: Finished extend-filesystems.service.
Oct 31 00:44:57.429506 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 31 00:44:57.444525 env[1321]: time="2025-10-31T00:44:57.444472720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 00:44:57.444661 env[1321]: time="2025-10-31T00:44:57.444640320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:44:57.445832 env[1321]: time="2025-10-31T00:44:57.445796400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:44:57.445832 env[1321]: time="2025-10-31T00:44:57.445829520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:44:57.446176 env[1321]: time="2025-10-31T00:44:57.446150160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:44:57.446231 env[1321]: time="2025-10-31T00:44:57.446175800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 00:44:57.446231 env[1321]: time="2025-10-31T00:44:57.446190320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 31 00:44:57.446231 env[1321]: time="2025-10-31T00:44:57.446200400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 00:44:57.446420 env[1321]: time="2025-10-31T00:44:57.446330040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:44:57.446606 env[1321]: time="2025-10-31T00:44:57.446586800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:44:57.446779 env[1321]: time="2025-10-31T00:44:57.446756760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:44:57.446779 env[1321]: time="2025-10-31T00:44:57.446777400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 00:44:57.446856 env[1321]: time="2025-10-31T00:44:57.446838160Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 31 00:44:57.446901 env[1321]: time="2025-10-31T00:44:57.446855160Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 00:44:57.450065 env[1321]: time="2025-10-31T00:44:57.450035440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 00:44:57.450065 env[1321]: time="2025-10-31T00:44:57.450069480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 00:44:57.450154 env[1321]: time="2025-10-31T00:44:57.450083080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 00:44:57.450154 env[1321]: time="2025-10-31T00:44:57.450113880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450154 env[1321]: time="2025-10-31T00:44:57.450130400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450236 env[1321]: time="2025-10-31T00:44:57.450143360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450236 env[1321]: time="2025-10-31T00:44:57.450170960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450767 env[1321]: time="2025-10-31T00:44:57.450742360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450923 env[1321]: time="2025-10-31T00:44:57.450901400Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450960 env[1321]: time="2025-10-31T00:44:57.450926800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450960 env[1321]: time="2025-10-31T00:44:57.450942960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.450960 env[1321]: time="2025-10-31T00:44:57.450957360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 00:44:57.451197 env[1321]: time="2025-10-31T00:44:57.451176520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 00:44:57.451421 env[1321]: time="2025-10-31T00:44:57.451397480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 00:44:57.452085 env[1321]: time="2025-10-31T00:44:57.451911000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 00:44:57.452121 env[1321]: time="2025-10-31T00:44:57.452102720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452145 env[1321]: time="2025-10-31T00:44:57.452120840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 00:44:57.452290 env[1321]: time="2025-10-31T00:44:57.452266800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452322 env[1321]: time="2025-10-31T00:44:57.452292400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452322 env[1321]: time="2025-10-31T00:44:57.452305960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452322 env[1321]: time="2025-10-31T00:44:57.452317600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452377 env[1321]: time="2025-10-31T00:44:57.452330400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452377 env[1321]: time="2025-10-31T00:44:57.452343000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452377 env[1321]: time="2025-10-31T00:44:57.452354080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452463 env[1321]: time="2025-10-31T00:44:57.452381040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452463 env[1321]: time="2025-10-31T00:44:57.452395320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 00:44:57.452701 env[1321]: time="2025-10-31T00:44:57.452675320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452735 env[1321]: time="2025-10-31T00:44:57.452707360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452735 env[1321]: time="2025-10-31T00:44:57.452720640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.452773 env[1321]: time="2025-10-31T00:44:57.452732080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 00:44:57.452773 env[1321]: time="2025-10-31T00:44:57.452747680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 31 00:44:57.452820 env[1321]: time="2025-10-31T00:44:57.452759040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 00:44:57.452820 env[1321]: time="2025-10-31T00:44:57.452794400Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 31 00:44:57.452861 env[1321]: time="2025-10-31T00:44:57.452828640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 31 00:44:57.453323 env[1321]: time="2025-10-31T00:44:57.453257080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 31 00:44:57.453934 env[1321]: time="2025-10-31T00:44:57.453332800Z" level=info msg="Connect containerd service"
Oct 31 00:44:57.453934 env[1321]: time="2025-10-31T00:44:57.453366160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 31 00:44:57.454217 env[1321]: time="2025-10-31T00:44:57.454188960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 00:44:57.454564 env[1321]: time="2025-10-31T00:44:57.454490600Z" level=info msg="Start subscribing containerd event"
Oct 31 00:44:57.454564 env[1321]: time="2025-10-31T00:44:57.454549760Z" level=info msg="Start recovering state"
Oct 31 00:44:57.454634 env[1321]: time="2025-10-31T00:44:57.454618800Z" level=info msg="Start event monitor"
Oct 31 00:44:57.454662 env[1321]: time="2025-10-31T00:44:57.454620120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 31 00:44:57.454662 env[1321]: time="2025-10-31T00:44:57.454642280Z" level=info msg="Start snapshots syncer"
Oct 31 00:44:57.454662 env[1321]: time="2025-10-31T00:44:57.454653200Z" level=info msg="Start cni network conf syncer for default"
Oct 31 00:44:57.454662 env[1321]: time="2025-10-31T00:44:57.454660320Z" level=info msg="Start streaming server"
Oct 31 00:44:57.454745 env[1321]: time="2025-10-31T00:44:57.454670600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 31 00:44:57.454745 env[1321]: time="2025-10-31T00:44:57.454716400Z" level=info msg="containerd successfully booted in 0.046892s"
Oct 31 00:44:57.454824 systemd[1]: Started containerd.service.
Oct 31 00:44:57.464133 locksmithd[1349]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 00:44:57.763258 tar[1310]: linux-arm64/README.md
Oct 31 00:44:57.767575 systemd[1]: Finished prepare-helm.service.
Oct 31 00:44:58.541788 systemd-networkd[1103]: eth0: Gained IPv6LL
Oct 31 00:44:58.543695 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 31 00:44:58.545135 systemd[1]: Reached target network-online.target.
Oct 31 00:44:58.547952 systemd[1]: Starting kubelet.service...
Oct 31 00:44:58.689277 sshd_keygen[1322]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 00:44:58.709526 systemd[1]: Finished sshd-keygen.service.
Oct 31 00:44:58.712347 systemd[1]: Starting issuegen.service...
Oct 31 00:44:58.717577 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 00:44:58.717840 systemd[1]: Finished issuegen.service.
Oct 31 00:44:58.721951 systemd[1]: Starting systemd-user-sessions.service...
Oct 31 00:44:58.730109 systemd[1]: Finished systemd-user-sessions.service.
Oct 31 00:44:58.733935 systemd[1]: Started getty@tty1.service.
Oct 31 00:44:58.736650 systemd[1]: Started serial-getty@ttyAMA0.service.
Oct 31 00:44:58.737943 systemd[1]: Reached target getty.target.
Oct 31 00:44:59.239147 systemd[1]: Started kubelet.service.
Oct 31 00:44:59.240619 systemd[1]: Reached target multi-user.target.
Oct 31 00:44:59.242969 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 31 00:44:59.251288 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 31 00:44:59.251550 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 31 00:44:59.253685 systemd[1]: Startup finished in 5.018s (kernel) + 5.261s (userspace) = 10.280s.
Oct 31 00:44:59.653115 kubelet[1392]: E1031 00:44:59.652996 1392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:44:59.654563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:44:59.654730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:45:02.971805 systemd[1]: Created slice system-sshd.slice.
Oct 31 00:45:02.973102 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:47694.service.
Oct 31 00:45:03.013283 sshd[1402]: Accepted publickey for core from 10.0.0.1 port 47694 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:45:03.015481 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.024914 systemd-logind[1305]: New session 1 of user core.
Oct 31 00:45:03.025806 systemd[1]: Created slice user-500.slice.
Oct 31 00:45:03.026847 systemd[1]: Starting user-runtime-dir@500.service...
Oct 31 00:45:03.036597 systemd[1]: Finished user-runtime-dir@500.service.
Oct 31 00:45:03.037934 systemd[1]: Starting user@500.service...
Oct 31 00:45:03.041145 (systemd)[1407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.104013 systemd[1407]: Queued start job for default target default.target.
Oct 31 00:45:03.104256 systemd[1407]: Reached target paths.target.
Oct 31 00:45:03.104272 systemd[1407]: Reached target sockets.target.
Oct 31 00:45:03.104282 systemd[1407]: Reached target timers.target.
Oct 31 00:45:03.104292 systemd[1407]: Reached target basic.target.
Oct 31 00:45:03.104337 systemd[1407]: Reached target default.target.
Oct 31 00:45:03.104361 systemd[1407]: Startup finished in 57ms.
Oct 31 00:45:03.104566 systemd[1]: Started user@500.service.
Oct 31 00:45:03.105606 systemd[1]: Started session-1.scope.
Oct 31 00:45:03.158846 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:47696.service.
Oct 31 00:45:03.194537 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 47696 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:45:03.195838 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.199817 systemd-logind[1305]: New session 2 of user core.
Oct 31 00:45:03.200641 systemd[1]: Started session-2.scope.
Oct 31 00:45:03.255143 sshd[1416]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:03.257588 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:47698.service.
Oct 31 00:45:03.258104 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:47696.service: Deactivated successfully.
Oct 31 00:45:03.258962 systemd-logind[1305]: Session 2 logged out. Waiting for processes to exit.
Oct 31 00:45:03.259038 systemd[1]: session-2.scope: Deactivated successfully.
Oct 31 00:45:03.259954 systemd-logind[1305]: Removed session 2.
Oct 31 00:45:03.292614 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 47698 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:45:03.294139 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.297983 systemd-logind[1305]: New session 3 of user core.
Oct 31 00:45:03.298346 systemd[1]: Started session-3.scope.
Oct 31 00:45:03.349508 sshd[1421]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:03.351870 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:47708.service.
Oct 31 00:45:03.352373 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:47698.service: Deactivated successfully.
Oct 31 00:45:03.353344 systemd[1]: session-3.scope: Deactivated successfully.
Oct 31 00:45:03.353345 systemd-logind[1305]: Session 3 logged out. Waiting for processes to exit.
Oct 31 00:45:03.354503 systemd-logind[1305]: Removed session 3.
Oct 31 00:45:03.387262 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 47708 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:45:03.388552 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.392394 systemd-logind[1305]: New session 4 of user core.
Oct 31 00:45:03.392885 systemd[1]: Started session-4.scope.
Oct 31 00:45:03.448830 sshd[1428]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:03.450953 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:47710.service.
Oct 31 00:45:03.452062 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:47708.service: Deactivated successfully.
Oct 31 00:45:03.453288 systemd[1]: session-4.scope: Deactivated successfully.
Oct 31 00:45:03.453711 systemd-logind[1305]: Session 4 logged out. Waiting for processes to exit.
Oct 31 00:45:03.454652 systemd-logind[1305]: Removed session 4.
Oct 31 00:45:03.486355 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 47710 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:45:03.487579 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.491692 systemd-logind[1305]: New session 5 of user core.
Oct 31 00:45:03.492703 systemd[1]: Started session-5.scope.
Oct 31 00:45:03.552196 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 31 00:45:03.552434 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 00:45:03.563668 dbus-daemon[1291]: avc: received setenforce notice (enforcing=1)
Oct 31 00:45:03.566189 sudo[1441]: pam_unix(sudo:session): session closed for user root
Oct 31 00:45:03.568249 sshd[1435]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:03.570757 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:47726.service.
Oct 31 00:45:03.571307 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:47710.service: Deactivated successfully.
Oct 31 00:45:03.572022 systemd[1]: session-5.scope: Deactivated successfully.
Oct 31 00:45:03.574878 systemd-logind[1305]: Session 5 logged out. Waiting for processes to exit.
Oct 31 00:45:03.575800 systemd-logind[1305]: Removed session 5.
Oct 31 00:45:03.610455 sshd[1443]: Accepted publickey for core from 10.0.0.1 port 47726 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:45:03.611831 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.615583 systemd-logind[1305]: New session 6 of user core.
Oct 31 00:45:03.616461 systemd[1]: Started session-6.scope.
Oct 31 00:45:03.676449 sudo[1450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 31 00:45:03.676679 sudo[1450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 00:45:03.679485 sudo[1450]: pam_unix(sudo:session): session closed for user root
Oct 31 00:45:03.684076 sudo[1449]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 31 00:45:03.684568 sudo[1449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 00:45:03.693519 systemd[1]: Stopping audit-rules.service...
Oct 31 00:45:03.694000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 31 00:45:03.698756 auditctl[1453]: No rules
Oct 31 00:45:03.699012 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 00:45:03.699252 systemd[1]: Stopped audit-rules.service.
Oct 31 00:45:03.700142 kernel: kauditd_printk_skb: 71 callbacks suppressed
Oct 31 00:45:03.700185 kernel: audit: type=1305 audit(1761871503.694:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 31 00:45:03.700202 kernel: audit: type=1300 audit(1761871503.694:152): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe379e0b0 a2=420 a3=0 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:45:03.694000 audit[1453]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe379e0b0 a2=420 a3=0 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:45:03.700859 systemd[1]: Starting audit-rules.service...
Oct 31 00:45:03.694000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Oct 31 00:45:03.706727 kernel: audit: type=1327 audit(1761871503.694:152): proctitle=2F7362696E2F617564697463746C002D44
Oct 31 00:45:03.706790 kernel: audit: type=1131 audit(1761871503.697:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.718071 augenrules[1471]: No rules
Oct 31 00:45:03.719065 systemd[1]: Finished audit-rules.service.
Oct 31 00:45:03.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.719930 sudo[1449]: pam_unix(sudo:session): session closed for user root
Oct 31 00:45:03.719000 audit[1449]: USER_END pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.724151 sshd[1443]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:03.724674 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:47742.service.
Oct 31 00:45:03.725703 kernel: audit: type=1130 audit(1761871503.718:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.725758 kernel: audit: type=1106 audit(1761871503.719:155): pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.725775 kernel: audit: type=1104 audit(1761871503.719:156): pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.719000 audit[1449]: CRED_DISP pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.727253 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:47726.service: Deactivated successfully.
Oct 31 00:45:03.728308 systemd[1]: session-6.scope: Deactivated successfully.
Oct 31 00:45:03.728642 systemd-logind[1305]: Session 6 logged out. Waiting for processes to exit.
Oct 31 00:45:03.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.54:22-10.0.0.1:47742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.729379 systemd-logind[1305]: Removed session 6.
Oct 31 00:45:03.731625 kernel: audit: type=1130 audit(1761871503.724:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.54:22-10.0.0.1:47742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.731700 kernel: audit: type=1106 audit(1761871503.725:158): pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.725000 audit[1443]: USER_END pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.725000 audit[1443]: CRED_DISP pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.738519 kernel: audit: type=1104 audit(1761871503.725:159): pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.54:22-10.0.0.1:47726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.763000 audit[1476]: USER_ACCT pid=1476 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.764204 sshd[1476]: Accepted publickey for core from 10.0.0.1 port 47742 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:45:03.764000 audit[1476]: CRED_ACQ pid=1476 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.765000 audit[1476]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd387ed20 a2=3 a3=1 items=0 ppid=1 pid=1476 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:45:03.765000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 00:45:03.765831 sshd[1476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:45:03.769493 systemd-logind[1305]: New session 7 of user core.
Oct 31 00:45:03.770019 systemd[1]: Started session-7.scope.
Oct 31 00:45:03.772000 audit[1476]: USER_START pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.774000 audit[1481]: CRED_ACQ pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:45:03.824563 sudo[1482]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 31 00:45:03.824789 sudo[1482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 00:45:03.823000 audit[1482]: USER_ACCT pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.823000 audit[1482]: CRED_REFR pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.826000 audit[1482]: USER_START pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 00:45:03.865784 systemd[1]: Starting docker.service...
Oct 31 00:45:03.921806 env[1493]: time="2025-10-31T00:45:03.921745762Z" level=info msg="Starting up"
Oct 31 00:45:03.923557 env[1493]: time="2025-10-31T00:45:03.923528867Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 31 00:45:03.923557 env[1493]: time="2025-10-31T00:45:03.923553576Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 31 00:45:03.923673 env[1493]: time="2025-10-31T00:45:03.923574065Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Oct 31 00:45:03.923673 env[1493]: time="2025-10-31T00:45:03.923585416Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 31 00:45:03.925977 env[1493]: time="2025-10-31T00:45:03.925948395Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 31 00:45:03.926066 env[1493]: time="2025-10-31T00:45:03.926051659Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 31 00:45:03.926144 env[1493]: time="2025-10-31T00:45:03.926128288Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Oct 31 00:45:03.926199 env[1493]: time="2025-10-31T00:45:03.926187295Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 31 00:45:03.932224 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1427776518-merged.mount: Deactivated successfully.
Oct 31 00:45:04.146487 env[1493]: time="2025-10-31T00:45:04.146382429Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 31 00:45:04.146487 env[1493]: time="2025-10-31T00:45:04.146412008Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 31 00:45:04.146701 env[1493]: time="2025-10-31T00:45:04.146661554Z" level=info msg="Loading containers: start."
Oct 31 00:45:04.195000 audit[1527]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.195000 audit[1527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd686d3c0 a2=0 a3=1 items=0 ppid=1493 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.195000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 31 00:45:04.197000 audit[1529]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.197000 audit[1529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffea1a9d20 a2=0 a3=1 items=0 ppid=1493 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.197000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 31 00:45:04.199000 audit[1531]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.199000 audit[1531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffaec0da0 a2=0 a3=1 items=0 ppid=1493 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.199000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 00:45:04.200000 
audit[1533]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.200000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff4511fd0 a2=0 a3=1 items=0 ppid=1493 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.200000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 00:45:04.203000 audit[1535]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.203000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc689c150 a2=0 a3=1 items=0 ppid=1493 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.203000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Oct 31 00:45:04.243000 audit[1540]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.243000 audit[1540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffefa8e590 a2=0 a3=1 items=0 ppid=1493 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.243000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Oct 31 00:45:04.250000 audit[1542]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.250000 audit[1542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffb91c490 a2=0 a3=1 items=0 ppid=1493 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.250000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Oct 31 00:45:04.251000 audit[1544]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.251000 audit[1544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc90e1660 a2=0 a3=1 items=0 ppid=1493 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.251000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Oct 31 00:45:04.253000 audit[1546]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.253000 audit[1546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe8a5b6f0 a2=0 a3=1 items=0 ppid=1493 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.253000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 00:45:04.260000 audit[1550]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.260000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd67a7360 a2=0 a3=1 items=0 ppid=1493 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 00:45:04.271000 audit[1551]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.271000 audit[1551]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc5993470 a2=0 a3=1 items=0 ppid=1493 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.271000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 00:45:04.282467 kernel: Initializing XFRM netlink socket Oct 31 00:45:04.306215 env[1493]: time="2025-10-31T00:45:04.306164474Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 31 00:45:04.321000 audit[1559]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.321000 audit[1559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffe8a1f9c0 a2=0 a3=1 items=0 ppid=1493 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.321000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Oct 31 00:45:04.342000 audit[1562]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.342000 audit[1562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffce85ee20 a2=0 a3=1 items=0 ppid=1493 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Oct 31 00:45:04.345000 audit[1565]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.345000 audit[1565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd7e2b7c0 a2=0 a3=1 items=0 ppid=1493 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 
31 00:45:04.345000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Oct 31 00:45:04.347000 audit[1567]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.347000 audit[1567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffcf338d0 a2=0 a3=1 items=0 ppid=1493 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.347000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Oct 31 00:45:04.349000 audit[1569]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.349000 audit[1569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffda299990 a2=0 a3=1 items=0 ppid=1493 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.349000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Oct 31 00:45:04.351000 audit[1571]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.351000 audit[1571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffd6ae19e0 a2=0 a3=1 items=0 ppid=1493 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.351000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Oct 31 00:45:04.352000 audit[1573]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.352000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff7b14fe0 a2=0 a3=1 items=0 ppid=1493 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.352000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Oct 31 00:45:04.359000 audit[1576]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.359000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffd9743260 a2=0 a3=1 items=0 ppid=1493 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.359000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Oct 31 00:45:04.361000 audit[1578]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.361000 
audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc20dd9a0 a2=0 a3=1 items=0 ppid=1493 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.361000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 00:45:04.363000 audit[1580]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.363000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffdd1491c0 a2=0 a3=1 items=0 ppid=1493 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.363000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 00:45:04.365000 audit[1582]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.365000 audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc7807e40 a2=0 a3=1 items=0 ppid=1493 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.365000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Oct 31 00:45:04.366803 systemd-networkd[1103]: docker0: Link UP Oct 31 00:45:04.376000 audit[1586]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.376000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffea45df30 a2=0 a3=1 items=0 ppid=1493 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.376000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 00:45:04.389000 audit[1587]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:04.389000 audit[1587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdb139360 a2=0 a3=1 items=0 ppid=1493 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:04.389000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 00:45:04.389917 env[1493]: time="2025-10-31T00:45:04.389868112Z" level=info msg="Loading containers: done." Oct 31 00:45:04.408722 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck400350961-merged.mount: Deactivated successfully. 
Oct 31 00:45:04.435634 env[1493]: time="2025-10-31T00:45:04.435584554Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 00:45:04.435800 env[1493]: time="2025-10-31T00:45:04.435781314Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 31 00:45:04.435903 env[1493]: time="2025-10-31T00:45:04.435882881Z" level=info msg="Daemon has completed initialization" Oct 31 00:45:04.450302 systemd[1]: Started docker.service. Oct 31 00:45:04.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:04.457854 env[1493]: time="2025-10-31T00:45:04.457730725Z" level=info msg="API listen on /run/docker.sock" Oct 31 00:45:05.263745 env[1321]: time="2025-10-31T00:45:05.263692670Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 31 00:45:05.930068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312208601.mount: Deactivated successfully. 
Oct 31 00:45:07.249783 env[1321]: time="2025-10-31T00:45:07.249736836Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:07.251607 env[1321]: time="2025-10-31T00:45:07.251573094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:07.254325 env[1321]: time="2025-10-31T00:45:07.254284189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:07.256424 env[1321]: time="2025-10-31T00:45:07.256381008Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:07.257303 env[1321]: time="2025-10-31T00:45:07.257273292Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 31 00:45:07.257930 env[1321]: time="2025-10-31T00:45:07.257906760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 31 00:45:08.736823 env[1321]: time="2025-10-31T00:45:08.736775038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:08.738812 env[1321]: time="2025-10-31T00:45:08.738777350Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 31 00:45:08.740785 env[1321]: time="2025-10-31T00:45:08.740755604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:08.742558 env[1321]: time="2025-10-31T00:45:08.742528435Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:08.744226 env[1321]: time="2025-10-31T00:45:08.744195840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 31 00:45:08.744686 env[1321]: time="2025-10-31T00:45:08.744661202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 31 00:45:09.905720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 00:45:09.905890 systemd[1]: Stopped kubelet.service. Oct 31 00:45:09.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:09.907417 systemd[1]: Starting kubelet.service... Oct 31 00:45:09.911005 kernel: kauditd_printk_skb: 84 callbacks suppressed Oct 31 00:45:09.912362 kernel: audit: type=1130 audit(1761871509.904:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:45:09.912410 kernel: audit: type=1131 audit(1761871509.904:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:09.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:10.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:10.005397 systemd[1]: Started kubelet.service. Oct 31 00:45:10.008448 kernel: audit: type=1130 audit(1761871510.004:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:45:10.094182 env[1321]: time="2025-10-31T00:45:10.093477187Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:10.096820 env[1321]: time="2025-10-31T00:45:10.096140549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:10.098351 env[1321]: time="2025-10-31T00:45:10.098300282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:10.100931 env[1321]: time="2025-10-31T00:45:10.100893456Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:10.101926 env[1321]: time="2025-10-31T00:45:10.101888880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 31 00:45:10.102396 env[1321]: time="2025-10-31T00:45:10.102370781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 31 00:45:10.115527 kubelet[1633]: E1031 00:45:10.115478 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:45:10.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 00:45:10.118076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:45:10.118278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:45:10.121458 kernel: audit: type=1131 audit(1761871510.117:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 00:45:11.179209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454905236.mount: Deactivated successfully. Oct 31 00:45:11.760447 env[1321]: time="2025-10-31T00:45:11.760374165Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:11.761759 env[1321]: time="2025-10-31T00:45:11.761717474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:11.763585 env[1321]: time="2025-10-31T00:45:11.763551670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:11.765787 env[1321]: time="2025-10-31T00:45:11.765749253Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:11.766328 env[1321]: time="2025-10-31T00:45:11.766281930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 31 
00:45:11.766774 env[1321]: time="2025-10-31T00:45:11.766750069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 31 00:45:12.674109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520853119.mount: Deactivated successfully. Oct 31 00:45:13.560711 env[1321]: time="2025-10-31T00:45:13.560650993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:13.562394 env[1321]: time="2025-10-31T00:45:13.562352223Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:13.564924 env[1321]: time="2025-10-31T00:45:13.564878282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:13.567301 env[1321]: time="2025-10-31T00:45:13.567246896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:13.568920 env[1321]: time="2025-10-31T00:45:13.568879326Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 31 00:45:13.569932 env[1321]: time="2025-10-31T00:45:13.569899967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 00:45:14.102095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945814831.mount: Deactivated successfully. 
Oct 31 00:45:14.106968 env[1321]: time="2025-10-31T00:45:14.106851497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:14.109751 env[1321]: time="2025-10-31T00:45:14.109723848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:14.112166 env[1321]: time="2025-10-31T00:45:14.112139524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:14.114668 env[1321]: time="2025-10-31T00:45:14.114587621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:14.115354 env[1321]: time="2025-10-31T00:45:14.115327441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 31 00:45:14.116746 env[1321]: time="2025-10-31T00:45:14.116690547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 31 00:45:14.754191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330363794.mount: Deactivated successfully. 
Oct 31 00:45:17.148300 env[1321]: time="2025-10-31T00:45:17.148240697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:17.149901 env[1321]: time="2025-10-31T00:45:17.149865760Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:17.152597 env[1321]: time="2025-10-31T00:45:17.152567305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:17.154743 env[1321]: time="2025-10-31T00:45:17.154710071Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:17.155615 env[1321]: time="2025-10-31T00:45:17.155583912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 31 00:45:20.369295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 00:45:20.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:20.369488 systemd[1]: Stopped kubelet.service. Oct 31 00:45:20.370973 systemd[1]: Starting kubelet.service... Oct 31 00:45:20.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:45:20.374400 kernel: audit: type=1130 audit(1761871520.368:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:20.374493 kernel: audit: type=1131 audit(1761871520.368:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:20.466271 systemd[1]: Started kubelet.service. Oct 31 00:45:20.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:20.469444 kernel: audit: type=1130 audit(1761871520.466:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:20.506082 kubelet[1669]: E1031 00:45:20.506026 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:45:20.510955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:45:20.511104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:45:20.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 31 00:45:20.514452 kernel: audit: type=1131 audit(1761871520.510:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 00:45:22.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.152717 systemd[1]: Stopped kubelet.service. Oct 31 00:45:22.154783 systemd[1]: Starting kubelet.service... Oct 31 00:45:22.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.163216 kernel: audit: type=1130 audit(1761871522.152:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.163295 kernel: audit: type=1131 audit(1761871522.152:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.186797 systemd[1]: Reloading. 
Oct 31 00:45:22.234503 /usr/lib/systemd/system-generators/torcx-generator[1705]: time="2025-10-31T00:45:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 00:45:22.234857 /usr/lib/systemd/system-generators/torcx-generator[1705]: time="2025-10-31T00:45:22Z" level=info msg="torcx already run" Oct 31 00:45:22.343430 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 00:45:22.343451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 00:45:22.361727 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:45:22.421353 systemd[1]: Started kubelet.service. Oct 31 00:45:22.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.425536 systemd[1]: Stopping kubelet.service... Oct 31 00:45:22.426433 kernel: audit: type=1130 audit(1761871522.420:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.426468 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:45:22.426721 systemd[1]: Stopped kubelet.service. 
Oct 31 00:45:22.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.428293 systemd[1]: Starting kubelet.service... Oct 31 00:45:22.430760 kernel: audit: type=1131 audit(1761871522.425:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.525271 systemd[1]: Started kubelet.service. Oct 31 00:45:22.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.531451 kernel: audit: type=1130 audit(1761871522.526:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:22.563341 kubelet[1768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:45:22.563341 kubelet[1768]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:45:22.563341 kubelet[1768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 00:45:22.563725 kubelet[1768]: I1031 00:45:22.563401 1768 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:45:23.876638 kubelet[1768]: I1031 00:45:23.876585 1768 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 00:45:23.876638 kubelet[1768]: I1031 00:45:23.876623 1768 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:45:23.877001 kubelet[1768]: I1031 00:45:23.876889 1768 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 00:45:23.898980 kubelet[1768]: E1031 00:45:23.898931 1768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:23.900070 kubelet[1768]: I1031 00:45:23.900031 1768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:45:23.908265 kubelet[1768]: E1031 00:45:23.908221 1768 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:45:23.908265 kubelet[1768]: I1031 00:45:23.908260 1768 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 00:45:23.911119 kubelet[1768]: I1031 00:45:23.911086 1768 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 00:45:23.912120 kubelet[1768]: I1031 00:45:23.912062 1768 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:45:23.912318 kubelet[1768]: I1031 00:45:23.912117 1768 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 00:45:23.912521 kubelet[1768]: I1031 00:45:23.912496 1768 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 31 00:45:23.912521 kubelet[1768]: I1031 00:45:23.912512 1768 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 00:45:23.912905 kubelet[1768]: I1031 00:45:23.912876 1768 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:45:23.915957 kubelet[1768]: I1031 00:45:23.915927 1768 kubelet.go:446] "Attempting to sync node with API server" Oct 31 00:45:23.915998 kubelet[1768]: I1031 00:45:23.915957 1768 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:45:23.915998 kubelet[1768]: I1031 00:45:23.915980 1768 kubelet.go:352] "Adding apiserver pod source" Oct 31 00:45:23.915998 kubelet[1768]: I1031 00:45:23.915991 1768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:45:23.927937 kubelet[1768]: I1031 00:45:23.927906 1768 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 00:45:23.928046 kubelet[1768]: W1031 00:45:23.927967 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:23.928046 kubelet[1768]: E1031 00:45:23.928020 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:23.928689 kubelet[1768]: I1031 00:45:23.928672 1768 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 00:45:23.928819 kubelet[1768]: W1031 00:45:23.928806 1768 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does 
not exist. Recreating. Oct 31 00:45:23.929086 kubelet[1768]: W1031 00:45:23.929046 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:23.929147 kubelet[1768]: E1031 00:45:23.929128 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:23.929833 kubelet[1768]: I1031 00:45:23.929809 1768 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 00:45:23.929882 kubelet[1768]: I1031 00:45:23.929849 1768 server.go:1287] "Started kubelet" Oct 31 00:45:23.942170 kubelet[1768]: I1031 00:45:23.942125 1768 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:45:23.943317 kubelet[1768]: I1031 00:45:23.943298 1768 server.go:479] "Adding debug handlers to kubelet server" Oct 31 00:45:23.945000 audit[1768]: AVC avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:23.945985 kubelet[1768]: I1031 00:45:23.945896 1768 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 00:45:23.945985 kubelet[1768]: I1031 00:45:23.945930 1768 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" 
err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 00:45:23.946047 kubelet[1768]: I1031 00:45:23.946000 1768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:45:23.947630 kubelet[1768]: E1031 00:45:23.947309 1768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736cd0f3b6a445 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:45:23.929826373 +0000 UTC m=+1.400988015,LastTimestamp:2025-10-31 00:45:23.929826373 +0000 UTC m=+1.400988015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 00:45:23.948020 kubelet[1768]: I1031 00:45:23.947994 1768 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:45:23.948200 kubelet[1768]: I1031 00:45:23.948131 1768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:45:23.948400 kubelet[1768]: I1031 00:45:23.948378 1768 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:45:23.945000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:23.945000 audit[1768]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cfc4e0 a1=40004d3c38 a2=4000cfc4b0 a3=25 items=0 ppid=1 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:23.945000 audit[1768]: AVC avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:23.945000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:23.945000 audit[1768]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cbece0 a1=40004d3c50 a2=4000cfc570 a3=25 items=0 ppid=1 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:23.949287 kubelet[1768]: E1031 00:45:23.949262 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:23.949397 kubelet[1768]: I1031 00:45:23.949385 1768 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 00:45:23.949460 kernel: audit: type=1400 audit(1761871523.945:207): avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:23.950478 kubelet[1768]: 
E1031 00:45:23.949858 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Oct 31 00:45:23.950580 kubelet[1768]: W1031 00:45:23.949853 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:23.950983 kubelet[1768]: I1031 00:45:23.949889 1768 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 00:45:23.950983 kubelet[1768]: I1031 00:45:23.950707 1768 reconciler.go:26] "Reconciler: start to sync state" Oct 31 00:45:23.951071 kubelet[1768]: I1031 00:45:23.950108 1768 factory.go:221] Registration of the systemd container factory successfully Oct 31 00:45:23.951071 kubelet[1768]: I1031 00:45:23.951059 1768 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:45:23.951639 kubelet[1768]: E1031 00:45:23.951608 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:23.951796 kubelet[1768]: E1031 00:45:23.951776 1768 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:45:23.951931 kubelet[1768]: I1031 00:45:23.951913 1768 factory.go:221] Registration of the containerd container factory successfully Oct 31 00:45:23.956000 audit[1783]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.956000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9309b60 a2=0 a3=1 items=0 ppid=1768 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 00:45:23.957000 audit[1784]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.957000 audit[1784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5b78c60 a2=0 a3=1 items=0 ppid=1768 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.957000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 00:45:23.960000 audit[1787]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.960000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff86ef0a0 a2=0 a3=1 items=0 ppid=1768 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 00:45:23.962000 audit[1789]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.962000 audit[1789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff2207070 a2=0 a3=1 items=0 ppid=1768 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 00:45:23.969000 audit[1792]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.969000 audit[1792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd3278870 a2=0 a3=1 items=0 ppid=1768 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.969000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 31 00:45:23.970490 kubelet[1768]: I1031 00:45:23.970445 1768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 31 00:45:23.970000 audit[1795]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:23.970000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc2a7db70 a2=0 a3=1 items=0 ppid=1768 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 00:45:23.971000 audit[1796]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.971000 audit[1796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7927390 a2=0 a3=1 items=0 ppid=1768 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 00:45:23.971793 kubelet[1768]: I1031 00:45:23.971768 1768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 00:45:23.971854 kubelet[1768]: I1031 00:45:23.971799 1768 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 00:45:23.971854 kubelet[1768]: I1031 00:45:23.971822 1768 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 00:45:23.971854 kubelet[1768]: I1031 00:45:23.971831 1768 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 00:45:23.971923 kubelet[1768]: E1031 00:45:23.971878 1768 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:45:23.972371 kubelet[1768]: W1031 00:45:23.972346 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:23.972541 kubelet[1768]: E1031 00:45:23.972380 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:23.973000 audit[1799]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:23.973000 audit[1799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd05d3510 a2=0 a3=1 items=0 ppid=1768 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 00:45:23.973721 kubelet[1768]: I1031 00:45:23.973699 1768 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:45:23.973721 kubelet[1768]: I1031 00:45:23.973719 1768 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:45:23.973804 kubelet[1768]: 
I1031 00:45:23.973738 1768 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:45:23.974000 audit[1798]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.974000 audit[1798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde230e10 a2=0 a3=1 items=0 ppid=1768 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 00:45:23.974000 audit[1800]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:23.974000 audit[1800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffffb0e4d10 a2=0 a3=1 items=0 ppid=1768 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 00:45:23.975000 audit[1801]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:23.975000 audit[1801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe59daae0 a2=0 a3=1 items=0 ppid=1768 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.975000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 00:45:23.975000 audit[1802]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:23.975000 audit[1802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdcd06e80 a2=0 a3=1 items=0 ppid=1768 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:23.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 00:45:24.050458 kubelet[1768]: E1031 00:45:24.050405 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:24.072259 kubelet[1768]: E1031 00:45:24.072228 1768 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 00:45:24.102656 kubelet[1768]: I1031 00:45:24.102609 1768 policy_none.go:49] "None policy: Start" Oct 31 00:45:24.102656 kubelet[1768]: I1031 00:45:24.102645 1768 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 00:45:24.102656 kubelet[1768]: I1031 00:45:24.102659 1768 state_mem.go:35] "Initializing new in-memory state store" Oct 31 00:45:24.111186 kubelet[1768]: I1031 00:45:24.111156 1768 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 00:45:24.109000 audit[1768]: AVC avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:24.109000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:24.109000 audit[1768]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e30660 a1=4000ad4180 a2=4000e30630 a3=25 items=0 ppid=1 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:24.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:24.111606 kubelet[1768]: I1031 00:45:24.111497 1768 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 00:45:24.111678 kubelet[1768]: I1031 00:45:24.111662 1768 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:45:24.111709 kubelet[1768]: I1031 00:45:24.111679 1768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:45:24.111940 kubelet[1768]: I1031 00:45:24.111924 1768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:45:24.114836 kubelet[1768]: E1031 00:45:24.114813 1768 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 00:45:24.114959 kubelet[1768]: E1031 00:45:24.114946 1768 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 00:45:24.151846 kubelet[1768]: E1031 00:45:24.151739 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Oct 31 00:45:24.213036 kubelet[1768]: I1031 00:45:24.213000 1768 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:45:24.213562 kubelet[1768]: E1031 00:45:24.213533 1768 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Oct 31 00:45:24.279913 kubelet[1768]: E1031 00:45:24.279886 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:24.280320 kubelet[1768]: E1031 00:45:24.280304 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:24.283047 kubelet[1768]: E1031 00:45:24.283027 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:24.352609 kubelet[1768]: I1031 00:45:24.352575 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4205fccb7b07d9c151b34d87ed718b49-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4205fccb7b07d9c151b34d87ed718b49\") " pod="kube-system/kube-apiserver-localhost" Oct 31 
00:45:24.352726 kubelet[1768]: I1031 00:45:24.352615 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4205fccb7b07d9c151b34d87ed718b49-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4205fccb7b07d9c151b34d87ed718b49\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:24.352726 kubelet[1768]: I1031 00:45:24.352638 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4205fccb7b07d9c151b34d87ed718b49-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4205fccb7b07d9c151b34d87ed718b49\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:24.352726 kubelet[1768]: I1031 00:45:24.352656 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:24.352726 kubelet[1768]: I1031 00:45:24.352673 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:24.352726 kubelet[1768]: I1031 00:45:24.352688 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " 
pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:24.352837 kubelet[1768]: I1031 00:45:24.352702 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:24.352837 kubelet[1768]: I1031 00:45:24.352716 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:24.352837 kubelet[1768]: I1031 00:45:24.352732 1768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:24.415189 kubelet[1768]: I1031 00:45:24.415097 1768 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:45:24.415952 kubelet[1768]: E1031 00:45:24.415923 1768 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Oct 31 00:45:24.552857 kubelet[1768]: E1031 00:45:24.552810 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" Oct 31 00:45:24.580362 kubelet[1768]: E1031 
00:45:24.580321 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:24.580692 kubelet[1768]: E1031 00:45:24.580658 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:24.581374 env[1321]: time="2025-10-31T00:45:24.581318885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 31 00:45:24.581374 env[1321]: time="2025-10-31T00:45:24.581320568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 31 00:45:24.584037 kubelet[1768]: E1031 00:45:24.584002 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:24.584710 env[1321]: time="2025-10-31T00:45:24.584511965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4205fccb7b07d9c151b34d87ed718b49,Namespace:kube-system,Attempt:0,}" Oct 31 00:45:24.817354 kubelet[1768]: I1031 00:45:24.817249 1768 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:45:24.817665 kubelet[1768]: E1031 00:45:24.817625 1768 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Oct 31 00:45:24.944702 kubelet[1768]: W1031 00:45:24.944654 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:24.944702 kubelet[1768]: E1031 00:45:24.944703 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:24.971838 kubelet[1768]: W1031 00:45:24.971767 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:24.971927 kubelet[1768]: E1031 00:45:24.971844 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:25.041632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922034114.mount: Deactivated successfully. 
Oct 31 00:45:25.046280 env[1321]: time="2025-10-31T00:45:25.046224989Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.050085 env[1321]: time="2025-10-31T00:45:25.050021260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.051346 env[1321]: time="2025-10-31T00:45:25.051316215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.052981 env[1321]: time="2025-10-31T00:45:25.052924535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.054730 env[1321]: time="2025-10-31T00:45:25.054699391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.055713 env[1321]: time="2025-10-31T00:45:25.055679098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.058007 env[1321]: time="2025-10-31T00:45:25.057960689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.060841 kubelet[1768]: W1031 00:45:25.060780 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:25.060934 kubelet[1768]: E1031 00:45:25.060849 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:25.061565 env[1321]: time="2025-10-31T00:45:25.061368978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.063233 env[1321]: time="2025-10-31T00:45:25.063089403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.064774 env[1321]: time="2025-10-31T00:45:25.064738176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.066425 env[1321]: time="2025-10-31T00:45:25.066383744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.068672 env[1321]: time="2025-10-31T00:45:25.068112701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:25.080314 env[1321]: time="2025-10-31T00:45:25.080232618Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:45:25.080314 env[1321]: time="2025-10-31T00:45:25.080279559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:45:25.080314 env[1321]: time="2025-10-31T00:45:25.080290613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:45:25.080673 env[1321]: time="2025-10-31T00:45:25.080567611Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9eabf72a6debc647619e79408110b0ef1e9d68b79230763b7b7154b7d634cfc1 pid=1811 runtime=io.containerd.runc.v2 Oct 31 00:45:25.099907 env[1321]: time="2025-10-31T00:45:25.099677890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:45:25.099907 env[1321]: time="2025-10-31T00:45:25.099718222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:45:25.099907 env[1321]: time="2025-10-31T00:45:25.099728796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:45:25.099907 env[1321]: time="2025-10-31T00:45:25.099848191Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f073fdc264a6faef2cf21a858869a59f5614f58a56e4419cba62502b234f2b pid=1835 runtime=io.containerd.runc.v2 Oct 31 00:45:25.111809 env[1321]: time="2025-10-31T00:45:25.111653901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:45:25.111809 env[1321]: time="2025-10-31T00:45:25.111719506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:45:25.111809 env[1321]: time="2025-10-31T00:45:25.111731001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:45:25.112256 env[1321]: time="2025-10-31T00:45:25.112166885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a3423499ba218a52610239e83fa49a004ead064886a10c2eacb70797ab8f4dc pid=1866 runtime=io.containerd.runc.v2 Oct 31 00:45:25.157590 env[1321]: time="2025-10-31T00:45:25.157535689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4205fccb7b07d9c151b34d87ed718b49,Namespace:kube-system,Attempt:0,} returns sandbox id \"94f073fdc264a6faef2cf21a858869a59f5614f58a56e4419cba62502b234f2b\"" Oct 31 00:45:25.157837 env[1321]: time="2025-10-31T00:45:25.157801313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eabf72a6debc647619e79408110b0ef1e9d68b79230763b7b7154b7d634cfc1\"" Oct 31 00:45:25.159227 kubelet[1768]: E1031 00:45:25.159049 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:25.159227 kubelet[1768]: E1031 00:45:25.159072 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:25.161782 env[1321]: time="2025-10-31T00:45:25.161737524Z" level=info msg="CreateContainer within sandbox 
\"94f073fdc264a6faef2cf21a858869a59f5614f58a56e4419cba62502b234f2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 00:45:25.162069 env[1321]: time="2025-10-31T00:45:25.162029982Z" level=info msg="CreateContainer within sandbox \"9eabf72a6debc647619e79408110b0ef1e9d68b79230763b7b7154b7d634cfc1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 00:45:25.175630 env[1321]: time="2025-10-31T00:45:25.175580951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a3423499ba218a52610239e83fa49a004ead064886a10c2eacb70797ab8f4dc\"" Oct 31 00:45:25.176579 kubelet[1768]: E1031 00:45:25.176407 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:25.179654 env[1321]: time="2025-10-31T00:45:25.179611164Z" level=info msg="CreateContainer within sandbox \"1a3423499ba218a52610239e83fa49a004ead064886a10c2eacb70797ab8f4dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 00:45:25.180805 env[1321]: time="2025-10-31T00:45:25.180760130Z" level=info msg="CreateContainer within sandbox \"94f073fdc264a6faef2cf21a858869a59f5614f58a56e4419cba62502b234f2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a52d0e2a710df22a8ec6657eaee1be12b9c3bf221196de575e5b0a09fecec6d0\"" Oct 31 00:45:25.181552 env[1321]: time="2025-10-31T00:45:25.181518471Z" level=info msg="CreateContainer within sandbox \"9eabf72a6debc647619e79408110b0ef1e9d68b79230763b7b7154b7d634cfc1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0fc367091c184d0e2f5bd72392575e8a78aeb6e77604bf4e4225aef7a82c6d43\"" Oct 31 00:45:25.181918 env[1321]: time="2025-10-31T00:45:25.181881741Z" level=info msg="StartContainer for 
\"a52d0e2a710df22a8ec6657eaee1be12b9c3bf221196de575e5b0a09fecec6d0\"" Oct 31 00:45:25.182030 env[1321]: time="2025-10-31T00:45:25.182005060Z" level=info msg="StartContainer for \"0fc367091c184d0e2f5bd72392575e8a78aeb6e77604bf4e4225aef7a82c6d43\"" Oct 31 00:45:25.196948 env[1321]: time="2025-10-31T00:45:25.196885668Z" level=info msg="CreateContainer within sandbox \"1a3423499ba218a52610239e83fa49a004ead064886a10c2eacb70797ab8f4dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76c9746d68572860744543548a3effc4b7f624f32cfc3a29e5c2f3e3cfd428fd\"" Oct 31 00:45:25.198232 env[1321]: time="2025-10-31T00:45:25.198191157Z" level=info msg="StartContainer for \"76c9746d68572860744543548a3effc4b7f624f32cfc3a29e5c2f3e3cfd428fd\"" Oct 31 00:45:25.242435 env[1321]: time="2025-10-31T00:45:25.242373627Z" level=info msg="StartContainer for \"a52d0e2a710df22a8ec6657eaee1be12b9c3bf221196de575e5b0a09fecec6d0\" returns successfully" Oct 31 00:45:25.247575 env[1321]: time="2025-10-31T00:45:25.247530137Z" level=info msg="StartContainer for \"0fc367091c184d0e2f5bd72392575e8a78aeb6e77604bf4e4225aef7a82c6d43\" returns successfully" Oct 31 00:45:25.274544 kubelet[1768]: W1031 00:45:25.274424 1768 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Oct 31 00:45:25.274544 kubelet[1768]: E1031 00:45:25.274501 1768 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:45:25.295426 env[1321]: time="2025-10-31T00:45:25.294984479Z" level=info msg="StartContainer for 
\"76c9746d68572860744543548a3effc4b7f624f32cfc3a29e5c2f3e3cfd428fd\" returns successfully" Oct 31 00:45:25.353728 kubelet[1768]: E1031 00:45:25.353611 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Oct 31 00:45:25.619178 kubelet[1768]: I1031 00:45:25.619084 1768 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:45:25.978392 kubelet[1768]: E1031 00:45:25.978293 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:25.978710 kubelet[1768]: E1031 00:45:25.978447 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:25.980274 kubelet[1768]: E1031 00:45:25.980246 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:25.980372 kubelet[1768]: E1031 00:45:25.980355 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:25.981840 kubelet[1768]: E1031 00:45:25.981818 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:25.981935 kubelet[1768]: E1031 00:45:25.981919 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:26.793461 kubelet[1768]: I1031 00:45:26.793400 1768 kubelet_node_status.go:78] 
"Successfully registered node" node="localhost" Oct 31 00:45:26.793461 kubelet[1768]: E1031 00:45:26.793450 1768 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 00:45:26.807072 kubelet[1768]: E1031 00:45:26.807024 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:26.908071 kubelet[1768]: E1031 00:45:26.908006 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:26.982945 kubelet[1768]: E1031 00:45:26.982895 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:26.983246 kubelet[1768]: E1031 00:45:26.983018 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:26.983292 kubelet[1768]: E1031 00:45:26.983267 1768 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:45:26.983447 kubelet[1768]: E1031 00:45:26.983430 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:27.008242 kubelet[1768]: E1031 00:45:27.008209 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.109356 kubelet[1768]: E1031 00:45:27.109216 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.209953 kubelet[1768]: E1031 00:45:27.209909 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Oct 31 00:45:27.310398 kubelet[1768]: E1031 00:45:27.310358 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.411220 kubelet[1768]: E1031 00:45:27.411098 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.512095 kubelet[1768]: E1031 00:45:27.512041 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.613076 kubelet[1768]: E1031 00:45:27.613017 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.713778 kubelet[1768]: E1031 00:45:27.713661 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.814371 kubelet[1768]: E1031 00:45:27.814332 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.914842 kubelet[1768]: E1031 00:45:27.914785 1768 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:45:27.950165 kubelet[1768]: I1031 00:45:27.950124 1768 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:27.959825 kubelet[1768]: I1031 00:45:27.959776 1768 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:27.964037 kubelet[1768]: I1031 00:45:27.963910 1768 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:28.509693 kubelet[1768]: I1031 00:45:28.509499 1768 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:28.517162 kubelet[1768]: E1031 00:45:28.517121 1768 kubelet.go:3196] "Failed creating 
a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:28.517304 kubelet[1768]: E1031 00:45:28.517287 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:28.919000 kubelet[1768]: I1031 00:45:28.918849 1768 apiserver.go:52] "Watching apiserver" Oct 31 00:45:28.921364 kubelet[1768]: E1031 00:45:28.921339 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:28.921649 kubelet[1768]: E1031 00:45:28.921610 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:28.951110 kubelet[1768]: I1031 00:45:28.951060 1768 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:45:28.976176 systemd[1]: Reloading. 
Oct 31 00:45:28.985322 kubelet[1768]: E1031 00:45:28.985290 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:29.020488 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-10-31T00:45:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 00:45:29.020519 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-10-31T00:45:29Z" level=info msg="torcx already run" Oct 31 00:45:29.096872 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 00:45:29.096896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 00:45:29.115231 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:45:29.182962 systemd[1]: Stopping kubelet.service... Oct 31 00:45:29.203816 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:45:29.204127 systemd[1]: Stopped kubelet.service. Oct 31 00:45:29.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:45:29.204859 kernel: kauditd_printk_skb: 47 callbacks suppressed Oct 31 00:45:29.204894 kernel: audit: type=1131 audit(1761871529.203:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:29.207001 systemd[1]: Starting kubelet.service... Oct 31 00:45:29.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:29.301620 systemd[1]: Started kubelet.service. Oct 31 00:45:29.305482 kernel: audit: type=1130 audit(1761871529.301:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:29.346044 kubelet[2117]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:45:29.346044 kubelet[2117]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:45:29.346044 kubelet[2117]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 00:45:29.346454 kubelet[2117]: I1031 00:45:29.346116 2117 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:45:29.352381 kubelet[2117]: I1031 00:45:29.352339 2117 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 00:45:29.352580 kubelet[2117]: I1031 00:45:29.352567 2117 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:45:29.352952 kubelet[2117]: I1031 00:45:29.352931 2117 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 00:45:29.354296 kubelet[2117]: I1031 00:45:29.354267 2117 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 00:45:29.357811 kubelet[2117]: I1031 00:45:29.357764 2117 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:45:29.364953 kubelet[2117]: E1031 00:45:29.363938 2117 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:45:29.364953 kubelet[2117]: I1031 00:45:29.363975 2117 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 00:45:29.370140 kubelet[2117]: I1031 00:45:29.370107 2117 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 00:45:29.370713 kubelet[2117]: I1031 00:45:29.370680 2117 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:45:29.370916 kubelet[2117]: I1031 00:45:29.370715 2117 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 00:45:29.370995 kubelet[2117]: I1031 00:45:29.370926 2117 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 31 00:45:29.370995 kubelet[2117]: I1031 00:45:29.370935 2117 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 00:45:29.370995 kubelet[2117]: I1031 00:45:29.370980 2117 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:45:29.371114 kubelet[2117]: I1031 00:45:29.371103 2117 kubelet.go:446] "Attempting to sync node with API server" Oct 31 00:45:29.371146 kubelet[2117]: I1031 00:45:29.371119 2117 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:45:29.371146 kubelet[2117]: I1031 00:45:29.371141 2117 kubelet.go:352] "Adding apiserver pod source" Oct 31 00:45:29.371200 kubelet[2117]: I1031 00:45:29.371150 2117 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:45:29.372314 kubelet[2117]: I1031 00:45:29.372280 2117 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 00:45:29.378902 kubelet[2117]: I1031 00:45:29.378864 2117 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 00:45:29.379707 kubelet[2117]: I1031 00:45:29.379688 2117 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 00:45:29.379765 kubelet[2117]: I1031 00:45:29.379751 2117 server.go:1287] "Started kubelet" Oct 31 00:45:29.380000 audit[2117]: AVC avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:29.383400 kubelet[2117]: I1031 00:45:29.383232 2117 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:45:29.383860 kubelet[2117]: I1031 00:45:29.383838 2117 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:45:29.384020 kubelet[2117]: I1031 00:45:29.383996 2117 server.go:169] "Starting to listen" 
address="0.0.0.0" port=10250 Oct 31 00:45:29.392217 kernel: audit: type=1400 audit(1761871529.380:224): avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:29.392338 kernel: audit: type=1401 audit(1761871529.380:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:29.380000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:29.380000 audit[2117]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a1fbc0 a1=4000a2cac8 a2=4000a1fb90 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:29.397041 kubelet[2117]: I1031 00:45:29.396996 2117 server.go:479] "Adding debug handlers to kubelet server" Oct 31 00:45:29.397274 kernel: audit: type=1300 audit(1761871529.380:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a1fbc0 a1=4000a2cac8 a2=4000a1fb90 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:29.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:29.399865 kubelet[2117]: I1031 00:45:29.399825 2117 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) 
/var/lib/kubelet/plugins_registry: invalid argument" Oct 31 00:45:29.400037 kubelet[2117]: I1031 00:45:29.400020 2117 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 00:45:29.400146 kubelet[2117]: I1031 00:45:29.400122 2117 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:45:29.400501 kubelet[2117]: I1031 00:45:29.400476 2117 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:45:29.400907 kubelet[2117]: I1031 00:45:29.400885 2117 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 00:45:29.401533 kubelet[2117]: I1031 00:45:29.401500 2117 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 00:45:29.402043 kernel: audit: type=1327 audit(1761871529.380:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:29.398000 audit[2117]: AVC avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:29.402874 kubelet[2117]: I1031 00:45:29.402858 2117 reconciler.go:26] "Reconciler: start to sync state" Oct 31 00:45:29.405732 kubelet[2117]: I1031 00:45:29.405707 2117 factory.go:221] Registration of the systemd container factory successfully Oct 31 00:45:29.406047 kubelet[2117]: E1031 00:45:29.406008 2117 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:45:29.406136 kubelet[2117]: I1031 00:45:29.406033 2117 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:45:29.408467 kernel: audit: type=1400 audit(1761871529.398:225): avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:29.398000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:29.409495 kubelet[2117]: I1031 00:45:29.409456 2117 factory.go:221] Registration of the containerd container factory successfully Oct 31 00:45:29.410500 kernel: audit: type=1401 audit(1761871529.398:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:29.398000 audit[2117]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bd42c0 a1=4000589bc0 a2=400096b980 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:29.415684 kernel: audit: type=1300 audit(1761871529.398:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bd42c0 a1=4000589bc0 a2=400096b980 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:29.415750 kernel: audit: type=1327 audit(1761871529.398:225): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:29.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:29.419008 kubelet[2117]: I1031 00:45:29.418962 2117 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 00:45:29.420840 kubelet[2117]: I1031 00:45:29.420813 2117 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 00:45:29.420950 kubelet[2117]: I1031 00:45:29.420938 2117 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 00:45:29.421021 kubelet[2117]: I1031 00:45:29.421009 2117 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 00:45:29.421072 kubelet[2117]: I1031 00:45:29.421063 2117 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 00:45:29.421170 kubelet[2117]: E1031 00:45:29.421151 2117 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:45:29.460884 kubelet[2117]: I1031 00:45:29.460784 2117 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:45:29.460884 kubelet[2117]: I1031 00:45:29.460804 2117 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:45:29.460884 kubelet[2117]: I1031 00:45:29.460827 2117 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:45:29.461692 kubelet[2117]: I1031 00:45:29.461666 2117 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 00:45:29.461795 kubelet[2117]: I1031 00:45:29.461769 2117 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 00:45:29.461848 kubelet[2117]: I1031 00:45:29.461839 2117 policy_none.go:49] "None policy: Start" Oct 31 00:45:29.461903 kubelet[2117]: I1031 00:45:29.461894 2117 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 00:45:29.461957 kubelet[2117]: I1031 00:45:29.461949 2117 state_mem.go:35] "Initializing new in-memory state store" Oct 31 00:45:29.462147 kubelet[2117]: I1031 00:45:29.462133 2117 state_mem.go:75] "Updated machine memory state" Oct 31 00:45:29.463579 kubelet[2117]: I1031 00:45:29.463552 2117 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 00:45:29.462000 audit[2117]: AVC avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:45:29.462000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 00:45:29.462000 audit[2117]: SYSCALL arch=c00000b7 
syscall=5 success=no exit=-22 a0=4001053830 a1=4001051530 a2=4001053800 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:29.462000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 00:45:29.463968 kubelet[2117]: I1031 00:45:29.463949 2117 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 00:45:29.464184 kubelet[2117]: I1031 00:45:29.464164 2117 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:45:29.464309 kubelet[2117]: I1031 00:45:29.464256 2117 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:45:29.465255 kubelet[2117]: I1031 00:45:29.465222 2117 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:45:29.465665 kubelet[2117]: E1031 00:45:29.465627 2117 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 00:45:29.522846 kubelet[2117]: I1031 00:45:29.522795 2117 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:29.522988 kubelet[2117]: I1031 00:45:29.522795 2117 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:29.523040 kubelet[2117]: I1031 00:45:29.522823 2117 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:29.529351 kubelet[2117]: E1031 00:45:29.529299 2117 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:29.529510 kubelet[2117]: E1031 00:45:29.529397 2117 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:29.529585 kubelet[2117]: E1031 00:45:29.529555 2117 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:29.572680 kubelet[2117]: I1031 00:45:29.572655 2117 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:45:29.580134 kubelet[2117]: I1031 00:45:29.580096 2117 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 00:45:29.580336 kubelet[2117]: I1031 00:45:29.580190 2117 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:45:29.605102 kubelet[2117]: I1031 00:45:29.605053 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4205fccb7b07d9c151b34d87ed718b49-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4205fccb7b07d9c151b34d87ed718b49\") " 
pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:29.605271 kubelet[2117]: I1031 00:45:29.605133 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:29.605271 kubelet[2117]: I1031 00:45:29.605245 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:29.605350 kubelet[2117]: I1031 00:45:29.605272 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4205fccb7b07d9c151b34d87ed718b49-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4205fccb7b07d9c151b34d87ed718b49\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:29.605350 kubelet[2117]: I1031 00:45:29.605328 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4205fccb7b07d9c151b34d87ed718b49-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4205fccb7b07d9c151b34d87ed718b49\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:45:29.605350 kubelet[2117]: I1031 00:45:29.605347 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:29.605446 kubelet[2117]: I1031 00:45:29.605364 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:29.605446 kubelet[2117]: I1031 00:45:29.605382 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:29.605446 kubelet[2117]: I1031 00:45:29.605420 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:29.830581 kubelet[2117]: E1031 00:45:29.830473 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:29.830907 kubelet[2117]: E1031 00:45:29.830885 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:29.831062 kubelet[2117]: E1031 00:45:29.831029 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 
00:45:30.372628 kubelet[2117]: I1031 00:45:30.372588 2117 apiserver.go:52] "Watching apiserver" Oct 31 00:45:30.401598 kubelet[2117]: I1031 00:45:30.401563 2117 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:45:30.434047 kubelet[2117]: E1031 00:45:30.434017 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:30.434227 kubelet[2117]: I1031 00:45:30.434150 2117 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:30.434503 kubelet[2117]: I1031 00:45:30.434252 2117 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:30.439645 kubelet[2117]: E1031 00:45:30.439618 2117 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 00:45:30.439892 kubelet[2117]: E1031 00:45:30.439874 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:30.440086 kubelet[2117]: E1031 00:45:30.440067 2117 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:45:30.440277 kubelet[2117]: E1031 00:45:30.440260 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:30.474623 kubelet[2117]: I1031 00:45:30.474553 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.474537548 podStartE2EDuration="3.474537548s" 
podCreationTimestamp="2025-10-31 00:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:45:30.467941944 +0000 UTC m=+1.161029701" watchObservedRunningTime="2025-10-31 00:45:30.474537548 +0000 UTC m=+1.167625305" Oct 31 00:45:30.482565 kubelet[2117]: I1031 00:45:30.482508 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.48249422 podStartE2EDuration="3.48249422s" podCreationTimestamp="2025-10-31 00:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:45:30.482186457 +0000 UTC m=+1.175274214" watchObservedRunningTime="2025-10-31 00:45:30.48249422 +0000 UTC m=+1.175581977" Oct 31 00:45:30.482698 kubelet[2117]: I1031 00:45:30.482605 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.482595126 podStartE2EDuration="3.482595126s" podCreationTimestamp="2025-10-31 00:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:45:30.474827292 +0000 UTC m=+1.167915009" watchObservedRunningTime="2025-10-31 00:45:30.482595126 +0000 UTC m=+1.175682883" Oct 31 00:45:31.436362 kubelet[2117]: E1031 00:45:31.435144 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:31.436362 kubelet[2117]: E1031 00:45:31.435777 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:31.436362 kubelet[2117]: E1031 00:45:31.436019 2117 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:32.437137 kubelet[2117]: E1031 00:45:32.437092 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:35.514311 kubelet[2117]: I1031 00:45:35.514269 2117 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 00:45:35.514696 env[1321]: time="2025-10-31T00:45:35.514650227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 00:45:35.515122 kubelet[2117]: I1031 00:45:35.514966 2117 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 00:45:36.074692 kubelet[2117]: E1031 00:45:36.074655 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:36.443932 kubelet[2117]: E1031 00:45:36.443805 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:36.452217 kubelet[2117]: I1031 00:45:36.452045 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18352dec-34af-453a-95ce-043a13804416-xtables-lock\") pod \"kube-proxy-kblzt\" (UID: \"18352dec-34af-453a-95ce-043a13804416\") " pod="kube-system/kube-proxy-kblzt" Oct 31 00:45:36.452217 kubelet[2117]: I1031 00:45:36.452090 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/18352dec-34af-453a-95ce-043a13804416-lib-modules\") pod \"kube-proxy-kblzt\" (UID: \"18352dec-34af-453a-95ce-043a13804416\") " pod="kube-system/kube-proxy-kblzt" Oct 31 00:45:36.452217 kubelet[2117]: I1031 00:45:36.452108 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18352dec-34af-453a-95ce-043a13804416-kube-proxy\") pod \"kube-proxy-kblzt\" (UID: \"18352dec-34af-453a-95ce-043a13804416\") " pod="kube-system/kube-proxy-kblzt" Oct 31 00:45:36.452217 kubelet[2117]: I1031 00:45:36.452128 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7676s\" (UniqueName: \"kubernetes.io/projected/18352dec-34af-453a-95ce-043a13804416-kube-api-access-7676s\") pod \"kube-proxy-kblzt\" (UID: \"18352dec-34af-453a-95ce-043a13804416\") " pod="kube-system/kube-proxy-kblzt" Oct 31 00:45:36.562986 kubelet[2117]: I1031 00:45:36.562932 2117 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 31 00:45:36.652488 kubelet[2117]: I1031 00:45:36.652406 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a3a5dc1-a9dd-49dc-9f61-4c96b9932ee5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wmmm2\" (UID: \"7a3a5dc1-a9dd-49dc-9f61-4c96b9932ee5\") " pod="tigera-operator/tigera-operator-7dcd859c48-wmmm2" Oct 31 00:45:36.652488 kubelet[2117]: I1031 00:45:36.652474 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tg9b\" (UniqueName: \"kubernetes.io/projected/7a3a5dc1-a9dd-49dc-9f61-4c96b9932ee5-kube-api-access-5tg9b\") pod \"tigera-operator-7dcd859c48-wmmm2\" (UID: \"7a3a5dc1-a9dd-49dc-9f61-4c96b9932ee5\") " pod="tigera-operator/tigera-operator-7dcd859c48-wmmm2" Oct 31 00:45:36.694850 kubelet[2117]: E1031 00:45:36.693693 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:36.694980 env[1321]: time="2025-10-31T00:45:36.694360349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kblzt,Uid:18352dec-34af-453a-95ce-043a13804416,Namespace:kube-system,Attempt:0,}" Oct 31 00:45:36.715381 env[1321]: time="2025-10-31T00:45:36.715267277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:45:36.715381 env[1321]: time="2025-10-31T00:45:36.715311270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:45:36.715381 env[1321]: time="2025-10-31T00:45:36.715328763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:45:36.715778 env[1321]: time="2025-10-31T00:45:36.715670499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2758d6befbcda6aaca3bace68c6f97a1941c9dc66c9001bd273827eb6243419 pid=2176 runtime=io.containerd.runc.v2 Oct 31 00:45:36.795862 env[1321]: time="2025-10-31T00:45:36.795816118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kblzt,Uid:18352dec-34af-453a-95ce-043a13804416,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2758d6befbcda6aaca3bace68c6f97a1941c9dc66c9001bd273827eb6243419\"" Oct 31 00:45:36.796682 kubelet[2117]: E1031 00:45:36.796624 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:36.798966 env[1321]: time="2025-10-31T00:45:36.798926491Z" level=info msg="CreateContainer within sandbox \"f2758d6befbcda6aaca3bace68c6f97a1941c9dc66c9001bd273827eb6243419\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 00:45:36.812738 env[1321]: time="2025-10-31T00:45:36.812686537Z" level=info msg="CreateContainer within sandbox \"f2758d6befbcda6aaca3bace68c6f97a1941c9dc66c9001bd273827eb6243419\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"002076af2d7fc03ffbfb7521e8b69cacd1ea5f6495dd0d9f6e92190eda50df3c\"" Oct 31 00:45:36.815040 env[1321]: time="2025-10-31T00:45:36.814276570Z" level=info msg="StartContainer for \"002076af2d7fc03ffbfb7521e8b69cacd1ea5f6495dd0d9f6e92190eda50df3c\"" Oct 31 00:45:36.856797 env[1321]: time="2025-10-31T00:45:36.856744956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wmmm2,Uid:7a3a5dc1-a9dd-49dc-9f61-4c96b9932ee5,Namespace:tigera-operator,Attempt:0,}" Oct 31 00:45:36.866941 env[1321]: time="2025-10-31T00:45:36.866624289Z" level=info msg="StartContainer 
for \"002076af2d7fc03ffbfb7521e8b69cacd1ea5f6495dd0d9f6e92190eda50df3c\" returns successfully" Oct 31 00:45:36.879138 env[1321]: time="2025-10-31T00:45:36.878720646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:45:36.879138 env[1321]: time="2025-10-31T00:45:36.878967671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:45:36.879138 env[1321]: time="2025-10-31T00:45:36.878978600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:45:36.879287 env[1321]: time="2025-10-31T00:45:36.879205250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/649b77675148e09feacebdf6d7481afd93ed874b15c5035464c3f85a52e931a0 pid=2252 runtime=io.containerd.runc.v2 Oct 31 00:45:36.931625 env[1321]: time="2025-10-31T00:45:36.931580990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wmmm2,Uid:7a3a5dc1-a9dd-49dc-9f61-4c96b9932ee5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"649b77675148e09feacebdf6d7481afd93ed874b15c5035464c3f85a52e931a0\"" Oct 31 00:45:36.933890 env[1321]: time="2025-10-31T00:45:36.933313691Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 00:45:37.010000 audit[2319]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.013695 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 31 00:45:37.013782 kernel: audit: type=1325 audit(1761871537.010:227): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.013804 kernel: audit: type=1300 audit(1761871537.010:227): arch=c00000b7 syscall=211 
success=yes exit=104 a0=3 a1=ffffd0916ac0 a2=0 a3=1 items=0 ppid=2228 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.010000 audit[2319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0916ac0 a2=0 a3=1 items=0 ppid=2228 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 00:45:37.019900 kernel: audit: type=1327 audit(1761871537.010:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 00:45:37.019968 kernel: audit: type=1325 audit(1761871537.018:228): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.018000 audit[2320]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.018000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd26c4b30 a2=0 a3=1 items=0 ppid=2228 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.025612 kernel: audit: type=1300 audit(1761871537.018:228): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd26c4b30 a2=0 a3=1 items=0 ppid=2228 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.025654 kernel: audit: type=1327 audit(1761871537.018:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 00:45:37.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 00:45:37.019000 audit[2321]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.029361 kernel: audit: type=1325 audit(1761871537.019:229): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.029404 kernel: audit: type=1300 audit(1761871537.019:229): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdec12930 a2=0 a3=1 items=0 ppid=2228 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.019000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdec12930 a2=0 a3=1 items=0 ppid=2228 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.033022 kernel: audit: type=1327 audit(1761871537.019:229): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 00:45:37.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 00:45:37.034751 kernel: audit: type=1325 audit(1761871537.020:230): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2322 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.020000 audit[2322]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.020000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddd10650 a2=0 a3=1 items=0 ppid=2228 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 00:45:37.022000 audit[2323]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.022000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe58fe7b0 a2=0 a3=1 items=0 ppid=2228 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 00:45:37.023000 audit[2324]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.023000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7483280 a2=0 a3=1 items=0 ppid=2228 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.023000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 00:45:37.112000 audit[2325]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.112000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc6feff40 a2=0 a3=1 items=0 ppid=2228 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 00:45:37.115000 audit[2327]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.115000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffecd2fb0 a2=0 a3=1 items=0 ppid=2228 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 31 00:45:37.118000 audit[2330]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.118000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffebfb9040 a2=0 a3=1 items=0 ppid=2228 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 31 00:45:37.120000 audit[2331]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.120000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8cd5830 a2=0 a3=1 items=0 ppid=2228 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 00:45:37.122000 audit[2333]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.122000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffecc5c360 a2=0 a3=1 items=0 ppid=2228 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 00:45:37.123000 audit[2334]: NETFILTER_CFG table=filter:49 family=2 entries=1 
op=nft_register_chain pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.123000 audit[2334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc80c4440 a2=0 a3=1 items=0 ppid=2228 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 00:45:37.125000 audit[2336]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.125000 audit[2336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdb8749d0 a2=0 a3=1 items=0 ppid=2228 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 00:45:37.128000 audit[2339]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.128000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc8a90220 a2=0 a3=1 items=0 ppid=2228 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.128000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 31 00:45:37.129000 audit[2340]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.129000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4296fa0 a2=0 a3=1 items=0 ppid=2228 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 00:45:37.132000 audit[2342]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.132000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef0e6710 a2=0 a3=1 items=0 ppid=2228 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 00:45:37.133000 audit[2343]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.133000 audit[2343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe23e2950 a2=0 a3=1 
items=0 ppid=2228 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 00:45:37.135000 audit[2345]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.135000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7f436d0 a2=0 a3=1 items=0 ppid=2228 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.135000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 00:45:37.138000 audit[2348]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.138000 audit[2348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec91bee0 a2=0 a3=1 items=0 ppid=2228 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.138000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 00:45:37.141000 audit[2351]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.141000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebc5ce40 a2=0 a3=1 items=0 ppid=2228 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 00:45:37.142000 audit[2352]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.142000 audit[2352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe8691160 a2=0 a3=1 items=0 ppid=2228 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.142000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 00:45:37.145000 audit[2354]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.145000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=ffffc2655c30 a2=0 a3=1 items=0 ppid=2228 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 00:45:37.148000 audit[2357]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.148000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc308270 a2=0 a3=1 items=0 ppid=2228 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 00:45:37.149000 audit[2358]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.149000 audit[2358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3d01c50 a2=0 a3=1 items=0 ppid=2228 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 00:45:37.152000 audit[2360]: NETFILTER_CFG 
table=nat:62 family=2 entries=1 op=nft_register_rule pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 00:45:37.152000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffc585a0d0 a2=0 a3=1 items=0 ppid=2228 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 00:45:37.173000 audit[2366]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:37.173000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdea22030 a2=0 a3=1 items=0 ppid=2228 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:37.181000 audit[2366]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:37.181000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffdea22030 a2=0 a3=1 items=0 ppid=2228 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.181000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:37.183000 audit[2371]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.183000 audit[2371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffebba2e0 a2=0 a3=1 items=0 ppid=2228 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 00:45:37.185000 audit[2373]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.185000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff7769840 a2=0 a3=1 items=0 ppid=2228 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 31 00:45:37.188000 audit[2376]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.188000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffec65d6f0 a2=0 a3=1 items=0 ppid=2228 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 31 00:45:37.189000 audit[2377]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.189000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa570490 a2=0 a3=1 items=0 ppid=2228 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 00:45:37.192000 audit[2379]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.192000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe4c21060 a2=0 a3=1 items=0 ppid=2228 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 00:45:37.193000 audit[2380]: NETFILTER_CFG table=filter:70 family=10 
entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.193000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc2ea940 a2=0 a3=1 items=0 ppid=2228 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 00:45:37.195000 audit[2382]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.195000 audit[2382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc7246b30 a2=0 a3=1 items=0 ppid=2228 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 31 00:45:37.198000 audit[2385]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.198000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffe4ff570 a2=0 a3=1 items=0 ppid=2228 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.198000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 00:45:37.199000 audit[2386]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.199000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd763f890 a2=0 a3=1 items=0 ppid=2228 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 00:45:37.201000 audit[2388]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.201000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd8a823f0 a2=0 a3=1 items=0 ppid=2228 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 00:45:37.202000 audit[2389]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.202000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc5f82a0 a2=0 
a3=1 items=0 ppid=2228 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.202000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 00:45:37.204000 audit[2391]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.204000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffee7fe8a0 a2=0 a3=1 items=0 ppid=2228 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.204000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 00:45:37.208000 audit[2394]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.208000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd68ee4d0 a2=0 a3=1 items=0 ppid=2228 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.208000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 00:45:37.211000 audit[2397]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.211000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc30649d0 a2=0 a3=1 items=0 ppid=2228 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 31 00:45:37.212000 audit[2398]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.212000 audit[2398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdd168190 a2=0 a3=1 items=0 ppid=2228 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 00:45:37.214000 audit[2400]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.214000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 
a0=3 a1=ffffeba13e50 a2=0 a3=1 items=0 ppid=2228 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 00:45:37.217000 audit[2403]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.217000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc408e430 a2=0 a3=1 items=0 ppid=2228 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.217000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 00:45:37.219000 audit[2404]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.219000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1c964a0 a2=0 a3=1 items=0 ppid=2228 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 00:45:37.221000 audit[2406]: 
NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.221000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffedb88220 a2=0 a3=1 items=0 ppid=2228 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 00:45:37.222000 audit[2407]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.222000 audit[2407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda16b190 a2=0 a3=1 items=0 ppid=2228 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 00:45:37.224000 audit[2409]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.224000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc381e090 a2=0 a3=1 items=0 ppid=2228 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.224000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 00:45:37.227000 audit[2412]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 00:45:37.227000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffed4850a0 a2=0 a3=1 items=0 ppid=2228 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 00:45:37.230000 audit[2414]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 00:45:37.230000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd71418c0 a2=0 a3=1 items=0 ppid=2228 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.230000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:37.230000 audit[2414]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 00:45:37.230000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd71418c0 a2=0 a3=1 items=0 ppid=2228 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:37.230000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:37.450546 kubelet[2117]: E1031 00:45:37.450401 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:37.451478 kubelet[2117]: E1031 00:45:37.451027 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:37.465423 kubelet[2117]: I1031 00:45:37.465359 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kblzt" podStartSLOduration=1.465342465 podStartE2EDuration="1.465342465s" podCreationTimestamp="2025-10-31 00:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:45:37.465152369 +0000 UTC m=+8.158240086" watchObservedRunningTime="2025-10-31 00:45:37.465342465 +0000 UTC m=+8.158430222" Oct 31 00:45:38.313781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719162717.mount: Deactivated successfully. 
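The audit `PROCTITLE` fields in the records above are the process's argv encoded as hex, with NUL bytes separating the arguments. A minimal Python sketch decoding one of the ip6tables proctitle values (the hex string is copied verbatim from the `KUBE-FIREWALL` audit record above):

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated arguments.
hex_proctitle = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4900494E505554002D740066696C746572"
    "002D6A004B5542452D4649524557414C4C"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
command = " ".join(arg.decode("ascii") for arg in argv)
print(command)  # ip6tables -w 5 -W 100000 -I INPUT -t filter -j KUBE-FIREWALL
```

The same decoding applied to the `ip6tables-restore` / `iptables-restore` records yields `-w 5 -W 100000 --noflush --counters`, i.e. kube-proxy's periodic rule sync.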
Oct 31 00:45:38.983190 env[1321]: time="2025-10-31T00:45:38.983119110Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:38.985331 env[1321]: time="2025-10-31T00:45:38.985277684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:38.987372 env[1321]: time="2025-10-31T00:45:38.987320460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:38.990041 env[1321]: time="2025-10-31T00:45:38.988931746Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:38.990041 env[1321]: time="2025-10-31T00:45:38.989348547Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 31 00:45:38.993780 env[1321]: time="2025-10-31T00:45:38.993499303Z" level=info msg="CreateContainer within sandbox \"649b77675148e09feacebdf6d7481afd93ed874b15c5035464c3f85a52e931a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 00:45:39.004288 env[1321]: time="2025-10-31T00:45:39.004233997Z" level=info msg="CreateContainer within sandbox \"649b77675148e09feacebdf6d7481afd93ed874b15c5035464c3f85a52e931a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"df511c4dc23cb48fd33037dd8789cc8c20c64abd80f74d8a201d2dece3ceeb91\"" Oct 31 00:45:39.004814 env[1321]: time="2025-10-31T00:45:39.004777865Z" level=info msg="StartContainer for 
\"df511c4dc23cb48fd33037dd8789cc8c20c64abd80f74d8a201d2dece3ceeb91\"" Oct 31 00:45:39.048447 env[1321]: time="2025-10-31T00:45:39.048369114Z" level=info msg="StartContainer for \"df511c4dc23cb48fd33037dd8789cc8c20c64abd80f74d8a201d2dece3ceeb91\" returns successfully" Oct 31 00:45:40.174044 kubelet[2117]: E1031 00:45:40.174015 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:40.184985 kubelet[2117]: I1031 00:45:40.184930 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wmmm2" podStartSLOduration=2.126578851 podStartE2EDuration="4.18491253s" podCreationTimestamp="2025-10-31 00:45:36 +0000 UTC" firstStartedPulling="2025-10-31 00:45:36.932868236 +0000 UTC m=+7.625955953" lastFinishedPulling="2025-10-31 00:45:38.991201875 +0000 UTC m=+9.684289632" observedRunningTime="2025-10-31 00:45:39.463403795 +0000 UTC m=+10.156491552" watchObservedRunningTime="2025-10-31 00:45:40.18491253 +0000 UTC m=+10.878000287" Oct 31 00:45:41.638441 kubelet[2117]: E1031 00:45:41.637129 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:42.458917 kubelet[2117]: E1031 00:45:42.458882 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:43.005778 update_engine[1306]: I1031 00:45:43.005718 1306 update_attempter.cc:509] Updating boot flags... 
Oct 31 00:45:44.402665 kernel: kauditd_printk_skb: 143 callbacks suppressed Oct 31 00:45:44.402813 kernel: audit: type=1106 audit(1761871544.397:278): pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 00:45:44.402839 kernel: audit: type=1104 audit(1761871544.397:279): pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 00:45:44.397000 audit[1482]: USER_END pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 00:45:44.397000 audit[1482]: CRED_DISP pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 00:45:44.398537 sudo[1482]: pam_unix(sudo:session): session closed for user root Oct 31 00:45:44.408057 sshd[1476]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:44.407000 audit[1476]: USER_END pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:45:44.410403 systemd-logind[1305]: Session 7 logged out. Waiting for processes to exit. Oct 31 00:45:44.410651 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:47742.service: Deactivated successfully. Oct 31 00:45:44.411398 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 31 00:45:44.412036 systemd-logind[1305]: Removed session 7. Oct 31 00:45:44.407000 audit[1476]: CRED_DISP pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:45:44.416074 kernel: audit: type=1106 audit(1761871544.407:280): pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:45:44.416154 kernel: audit: type=1104 audit(1761871544.407:281): pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:45:44.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.54:22-10.0.0.1:47742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:45:44.419236 kernel: audit: type=1131 audit(1761871544.409:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.54:22-10.0.0.1:47742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:45:45.563000 audit[2521]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:45.566429 kernel: audit: type=1325 audit(1761871545.563:283): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:45.563000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe3a96f60 a2=0 a3=1 items=0 ppid=2228 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:45.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:45.575037 kernel: audit: type=1300 audit(1761871545.563:283): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe3a96f60 a2=0 a3=1 items=0 ppid=2228 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:45.575091 kernel: audit: type=1327 audit(1761871545.563:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:45.577000 audit[2521]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:45.580432 kernel: audit: type=1325 audit(1761871545.577:284): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:45.577000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe3a96f60 a2=0 a3=1 items=0 ppid=2228 pid=2521 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:45.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:45.586429 kernel: audit: type=1300 audit(1761871545.577:284): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe3a96f60 a2=0 a3=1 items=0 ppid=2228 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:45.599000 audit[2523]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:45.599000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff10f06b0 a2=0 a3=1 items=0 ppid=2228 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:45.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:45.606000 audit[2523]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:45.606000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff10f06b0 a2=0 a3=1 items=0 ppid=2228 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:45.606000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.513000 audit[2526]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:49.516050 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 31 00:45:49.516112 kernel: audit: type=1325 audit(1761871549.513:287): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:49.513000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd23bc9b0 a2=0 a3=1 items=0 ppid=2228 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:49.522376 kernel: audit: type=1300 audit(1761871549.513:287): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd23bc9b0 a2=0 a3=1 items=0 ppid=2228 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:49.522496 kernel: audit: type=1327 audit(1761871549.513:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.524000 audit[2526]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:49.524000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd23bc9b0 a2=0 a3=1 items=0 ppid=2228 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:49.533685 kernel: audit: type=1325 audit(1761871549.524:288): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:49.533805 kernel: audit: type=1300 audit(1761871549.524:288): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd23bc9b0 a2=0 a3=1 items=0 ppid=2228 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:49.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.544614 kernel: audit: type=1327 audit(1761871549.524:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.547000 audit[2528]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:49.547000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffefb18dc0 a2=0 a3=1 items=0 ppid=2228 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:49.555784 kernel: audit: type=1325 audit(1761871549.547:289): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:49.555852 kernel: audit: type=1300 audit(1761871549.547:289): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffefb18dc0 a2=0 a3=1 items=0 ppid=2228 pid=2528 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:49.555877 kernel: audit: type=1327 audit(1761871549.547:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.560000 audit[2528]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:49.560000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffefb18dc0 a2=0 a3=1 items=0 ppid=2228 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:49.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:49.565450 kernel: audit: type=1325 audit(1761871549.560:290): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:50.784000 audit[2530]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:50.784000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc6f35a80 a2=0 a3=1 items=0 ppid=2228 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:50.784000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:50.805000 audit[2530]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:50.805000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc6f35a80 a2=0 a3=1 items=0 ppid=2228 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:50.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:51.818000 audit[2532]: NETFILTER_CFG table=filter:99 family=2 entries=20 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:51.818000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe646e4f0 a2=0 a3=1 items=0 ppid=2228 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:45:51.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:45:51.831000 audit[2532]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:45:51.831000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe646e4f0 a2=0 a3=1 items=0 ppid=2228 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 
31 00:45:51.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 00:45:52.359668 kubelet[2117]: I1031 00:45:52.359622 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cncxq\" (UniqueName: \"kubernetes.io/projected/2b253862-079d-4a6f-b3f8-c9e5c5d736e7-kube-api-access-cncxq\") pod \"calico-typha-7984d9f79c-mqhkw\" (UID: \"2b253862-079d-4a6f-b3f8-c9e5c5d736e7\") " pod="calico-system/calico-typha-7984d9f79c-mqhkw"
Oct 31 00:45:52.359668 kubelet[2117]: I1031 00:45:52.359668 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2b253862-079d-4a6f-b3f8-c9e5c5d736e7-typha-certs\") pod \"calico-typha-7984d9f79c-mqhkw\" (UID: \"2b253862-079d-4a6f-b3f8-c9e5c5d736e7\") " pod="calico-system/calico-typha-7984d9f79c-mqhkw"
Oct 31 00:45:52.360121 kubelet[2117]: I1031 00:45:52.359691 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b253862-079d-4a6f-b3f8-c9e5c5d736e7-tigera-ca-bundle\") pod \"calico-typha-7984d9f79c-mqhkw\" (UID: \"2b253862-079d-4a6f-b3f8-c9e5c5d736e7\") " pod="calico-system/calico-typha-7984d9f79c-mqhkw"
Oct 31 00:45:52.642062 kubelet[2117]: E1031 00:45:52.641959 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:45:52.643443 env[1321]: time="2025-10-31T00:45:52.642973729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7984d9f79c-mqhkw,Uid:2b253862-079d-4a6f-b3f8-c9e5c5d736e7,Namespace:calico-system,Attempt:0,}"
Oct 31 00:45:52.665556 env[1321]: time="2025-10-31T00:45:52.665486199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:45:52.665556 env[1321]: time="2025-10-31T00:45:52.665529254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:45:52.665556 env[1321]: time="2025-10-31T00:45:52.665540617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:45:52.665854 env[1321]: time="2025-10-31T00:45:52.665820833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f46dfa0c6c43ba1d0f5910fda26e022d61fefe5f44717c457b646e1daee8cadb pid=2541 runtime=io.containerd.runc.v2
Oct 31 00:45:52.758621 env[1321]: time="2025-10-31T00:45:52.758577078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7984d9f79c-mqhkw,Uid:2b253862-079d-4a6f-b3f8-c9e5c5d736e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"f46dfa0c6c43ba1d0f5910fda26e022d61fefe5f44717c457b646e1daee8cadb\""
Oct 31 00:45:52.760801 kubelet[2117]: E1031 00:45:52.760766 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:45:52.761879 kubelet[2117]: I1031 00:45:52.761839 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec20dd65-8434-41eb-a061-18cee3683d0b-node-certs\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.761879 kubelet[2117]: I1031 00:45:52.761879 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-var-lib-calico\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.761990 kubelet[2117]: I1031 00:45:52.761896 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-cni-bin-dir\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.761990 kubelet[2117]: I1031 00:45:52.761912 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-lib-modules\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.761990 kubelet[2117]: I1031 00:45:52.761938 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec20dd65-8434-41eb-a061-18cee3683d0b-tigera-ca-bundle\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.761990 kubelet[2117]: I1031 00:45:52.761954 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-xtables-lock\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.761990 kubelet[2117]: I1031 00:45:52.761973 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-flexvol-driver-host\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.762147 kubelet[2117]: I1031 00:45:52.761989 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-var-run-calico\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.762147 kubelet[2117]: I1031 00:45:52.762022 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-cni-log-dir\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.762147 kubelet[2117]: I1031 00:45:52.762048 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvfz\" (UniqueName: \"kubernetes.io/projected/ec20dd65-8434-41eb-a061-18cee3683d0b-kube-api-access-jcvfz\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.762147 kubelet[2117]: I1031 00:45:52.762085 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-cni-net-dir\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.762147 kubelet[2117]: I1031 00:45:52.762105 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec20dd65-8434-41eb-a061-18cee3683d0b-policysync\") pod \"calico-node-d66c4\" (UID: \"ec20dd65-8434-41eb-a061-18cee3683d0b\") " pod="calico-system/calico-node-d66c4"
Oct 31 00:45:52.764331 env[1321]: time="2025-10-31T00:45:52.764293876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 31 00:45:52.847000 audit[2576]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 00:45:52.847000 audit[2576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe6db6ab0 a2=0 a3=1 items=0 ppid=2228 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:45:52.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 00:45:52.854000 audit[2576]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 00:45:52.854000 audit[2576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6db6ab0 a2=0 a3=1 items=0 ppid=2228 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:45:52.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 00:45:52.866095 kubelet[2117]: E1031 00:45:52.866065 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.869567 kubelet[2117]: W1031 00:45:52.869457 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.870073 kubelet[2117]: E1031 00:45:52.869833 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.870073 kubelet[2117]: W1031 00:45:52.869852 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.870392 kubelet[2117]: E1031 00:45:52.870301 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.870392 kubelet[2117]: E1031 00:45:52.870334 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.874763 kubelet[2117]: E1031 00:45:52.874688 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.874763 kubelet[2117]: W1031 00:45:52.874707 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.874763 kubelet[2117]: E1031 00:45:52.874723 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.914119 kubelet[2117]: E1031 00:45:52.913623 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9"
Oct 31 00:45:52.947691 kubelet[2117]: E1031 00:45:52.947582 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.947691 kubelet[2117]: W1031 00:45:52.947606 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.947691 kubelet[2117]: E1031 00:45:52.947627 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.947971 kubelet[2117]: E1031 00:45:52.947887 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.947971 kubelet[2117]: W1031 00:45:52.947897 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.947971 kubelet[2117]: E1031 00:45:52.947937 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.948277 kubelet[2117]: E1031 00:45:52.948261 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.948277 kubelet[2117]: W1031 00:45:52.948276 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.948365 kubelet[2117]: E1031 00:45:52.948288 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.948473 kubelet[2117]: E1031 00:45:52.948463 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.948473 kubelet[2117]: W1031 00:45:52.948473 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.948548 kubelet[2117]: E1031 00:45:52.948482 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.948818 kubelet[2117]: E1031 00:45:52.948802 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.948818 kubelet[2117]: W1031 00:45:52.948817 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.948910 kubelet[2117]: E1031 00:45:52.948827 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.949246 kubelet[2117]: E1031 00:45:52.949043 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.949246 kubelet[2117]: W1031 00:45:52.949053 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.949246 kubelet[2117]: E1031 00:45:52.949063 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.949246 kubelet[2117]: E1031 00:45:52.949195 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.949246 kubelet[2117]: W1031 00:45:52.949203 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.949246 kubelet[2117]: E1031 00:45:52.949212 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.949442 kubelet[2117]: E1031 00:45:52.949427 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.949442 kubelet[2117]: W1031 00:45:52.949436 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.949564 kubelet[2117]: E1031 00:45:52.949445 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.949690 kubelet[2117]: E1031 00:45:52.949678 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.949690 kubelet[2117]: W1031 00:45:52.949689 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.949763 kubelet[2117]: E1031 00:45:52.949699 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.949936 kubelet[2117]: E1031 00:45:52.949924 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.949936 kubelet[2117]: W1031 00:45:52.949936 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.950030 kubelet[2117]: E1031 00:45:52.949945 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.950221 kubelet[2117]: E1031 00:45:52.950088 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.950221 kubelet[2117]: W1031 00:45:52.950096 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.950221 kubelet[2117]: E1031 00:45:52.950103 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.950331 kubelet[2117]: E1031 00:45:52.950313 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.950331 kubelet[2117]: W1031 00:45:52.950326 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.950390 kubelet[2117]: E1031 00:45:52.950335 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.950493 kubelet[2117]: E1031 00:45:52.950483 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.950493 kubelet[2117]: W1031 00:45:52.950493 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.950568 kubelet[2117]: E1031 00:45:52.950501 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.950649 kubelet[2117]: E1031 00:45:52.950640 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.950649 kubelet[2117]: W1031 00:45:52.950649 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.950782 kubelet[2117]: E1031 00:45:52.950658 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.950930 kubelet[2117]: E1031 00:45:52.950912 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.950930 kubelet[2117]: W1031 00:45:52.950921 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.950930 kubelet[2117]: E1031 00:45:52.950929 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.951160 kubelet[2117]: E1031 00:45:52.951147 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.951160 kubelet[2117]: W1031 00:45:52.951160 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.951228 kubelet[2117]: E1031 00:45:52.951169 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.951363 kubelet[2117]: E1031 00:45:52.951334 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.951363 kubelet[2117]: W1031 00:45:52.951342 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.951363 kubelet[2117]: E1031 00:45:52.951349 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.951517 kubelet[2117]: E1031 00:45:52.951506 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.951553 kubelet[2117]: W1031 00:45:52.951518 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.951553 kubelet[2117]: E1031 00:45:52.951529 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.951719 kubelet[2117]: E1031 00:45:52.951696 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.951765 kubelet[2117]: W1031 00:45:52.951703 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.951793 kubelet[2117]: E1031 00:45:52.951765 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.951968 kubelet[2117]: E1031 00:45:52.951959 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.951968 kubelet[2117]: W1031 00:45:52.951969 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.952042 kubelet[2117]: E1031 00:45:52.951977 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.963387 kubelet[2117]: E1031 00:45:52.963366 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.963526 kubelet[2117]: W1031 00:45:52.963511 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.963603 kubelet[2117]: E1031 00:45:52.963591 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.963692 kubelet[2117]: I1031 00:45:52.963676 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0bdf479-9385-4085-afb4-2cdc588aefd9-kubelet-dir\") pod \"csi-node-driver-25c9f\" (UID: \"c0bdf479-9385-4085-afb4-2cdc588aefd9\") " pod="calico-system/csi-node-driver-25c9f"
Oct 31 00:45:52.963956 kubelet[2117]: E1031 00:45:52.963937 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.964018 kubelet[2117]: W1031 00:45:52.963955 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.964018 kubelet[2117]: E1031 00:45:52.963975 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.964164 kubelet[2117]: E1031 00:45:52.964151 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.964164 kubelet[2117]: W1031 00:45:52.964161 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.964216 kubelet[2117]: E1031 00:45:52.964175 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.964337 kubelet[2117]: E1031 00:45:52.964327 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.964363 kubelet[2117]: W1031 00:45:52.964337 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.964363 kubelet[2117]: E1031 00:45:52.964346 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.964421 kubelet[2117]: I1031 00:45:52.964367 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0bdf479-9385-4085-afb4-2cdc588aefd9-socket-dir\") pod \"csi-node-driver-25c9f\" (UID: \"c0bdf479-9385-4085-afb4-2cdc588aefd9\") " pod="calico-system/csi-node-driver-25c9f"
Oct 31 00:45:52.964542 kubelet[2117]: E1031 00:45:52.964531 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.964567 kubelet[2117]: W1031 00:45:52.964542 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.964567 kubelet[2117]: E1031 00:45:52.964555 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.964616 kubelet[2117]: I1031 00:45:52.964569 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgt8\" (UniqueName: \"kubernetes.io/projected/c0bdf479-9385-4085-afb4-2cdc588aefd9-kube-api-access-9hgt8\") pod \"csi-node-driver-25c9f\" (UID: \"c0bdf479-9385-4085-afb4-2cdc588aefd9\") " pod="calico-system/csi-node-driver-25c9f"
Oct 31 00:45:52.964727 kubelet[2117]: E1031 00:45:52.964716 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.964752 kubelet[2117]: W1031 00:45:52.964727 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.964752 kubelet[2117]: E1031 00:45:52.964739 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.964810 kubelet[2117]: I1031 00:45:52.964753 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0bdf479-9385-4085-afb4-2cdc588aefd9-varrun\") pod \"csi-node-driver-25c9f\" (UID: \"c0bdf479-9385-4085-afb4-2cdc588aefd9\") " pod="calico-system/csi-node-driver-25c9f"
Oct 31 00:45:52.964909 kubelet[2117]: E1031 00:45:52.964895 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.964909 kubelet[2117]: W1031 00:45:52.964906 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.964974 kubelet[2117]: E1031 00:45:52.964919 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.964974 kubelet[2117]: I1031 00:45:52.964933 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0bdf479-9385-4085-afb4-2cdc588aefd9-registration-dir\") pod \"csi-node-driver-25c9f\" (UID: \"c0bdf479-9385-4085-afb4-2cdc588aefd9\") " pod="calico-system/csi-node-driver-25c9f"
Oct 31 00:45:52.965141 kubelet[2117]: E1031 00:45:52.965128 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.965141 kubelet[2117]: W1031 00:45:52.965138 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.965200 kubelet[2117]: E1031 00:45:52.965155 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.965300 kubelet[2117]: E1031 00:45:52.965288 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.965300 kubelet[2117]: W1031 00:45:52.965298 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.965361 kubelet[2117]: E1031 00:45:52.965310 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.965496 kubelet[2117]: E1031 00:45:52.965484 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.965496 kubelet[2117]: W1031 00:45:52.965495 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.965568 kubelet[2117]: E1031 00:45:52.965508 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.965661 kubelet[2117]: E1031 00:45:52.965650 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.965661 kubelet[2117]: W1031 00:45:52.965659 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.965725 kubelet[2117]: E1031 00:45:52.965672 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.965841 kubelet[2117]: E1031 00:45:52.965828 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.965841 kubelet[2117]: W1031 00:45:52.965839 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.965907 kubelet[2117]: E1031 00:45:52.965852 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.966036 kubelet[2117]: E1031 00:45:52.966023 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.966083 kubelet[2117]: W1031 00:45:52.966050 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.966136 kubelet[2117]: E1031 00:45:52.966111 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.966252 kubelet[2117]: E1031 00:45:52.966241 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.966287 kubelet[2117]: W1031 00:45:52.966254 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.966287 kubelet[2117]: E1031 00:45:52.966264 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:52.966474 kubelet[2117]: E1031 00:45:52.966463 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:45:52.966515 kubelet[2117]: W1031 00:45:52.966475 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:45:52.966515 kubelet[2117]: E1031 00:45:52.966483 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:45:53.022374 kubelet[2117]: E1031 00:45:53.022340 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:45:53.022942 env[1321]: time="2025-10-31T00:45:53.022881112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d66c4,Uid:ec20dd65-8434-41eb-a061-18cee3683d0b,Namespace:calico-system,Attempt:0,}"
Oct 31 00:45:53.043508 env[1321]: time="2025-10-31T00:45:53.043401930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:45:53.043508 env[1321]: time="2025-10-31T00:45:53.043474954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:45:53.043741 env[1321]: time="2025-10-31T00:45:53.043486598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:45:53.044082 env[1321]: time="2025-10-31T00:45:53.044043101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909 pid=2636 runtime=io.containerd.runc.v2 Oct 31 00:45:53.066008 kubelet[2117]: E1031 00:45:53.065947 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.066008 kubelet[2117]: W1031 00:45:53.065986 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.066008 kubelet[2117]: E1031 00:45:53.066008 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.066223 kubelet[2117]: E1031 00:45:53.066216 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.066249 kubelet[2117]: W1031 00:45:53.066225 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.066249 kubelet[2117]: E1031 00:45:53.066235 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.066463 kubelet[2117]: E1031 00:45:53.066449 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.066513 kubelet[2117]: W1031 00:45:53.066464 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.066513 kubelet[2117]: E1031 00:45:53.066479 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.066702 kubelet[2117]: E1031 00:45:53.066689 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.066702 kubelet[2117]: W1031 00:45:53.066700 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.066768 kubelet[2117]: E1031 00:45:53.066718 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.067004 kubelet[2117]: E1031 00:45:53.066976 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.067004 kubelet[2117]: W1031 00:45:53.066989 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.067098 kubelet[2117]: E1031 00:45:53.067020 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.067239 kubelet[2117]: E1031 00:45:53.067223 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.067239 kubelet[2117]: W1031 00:45:53.067235 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.067303 kubelet[2117]: E1031 00:45:53.067251 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.067411 kubelet[2117]: E1031 00:45:53.067399 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.067411 kubelet[2117]: W1031 00:45:53.067409 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.067524 kubelet[2117]: E1031 00:45:53.067458 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.067666 kubelet[2117]: E1031 00:45:53.067656 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.067666 kubelet[2117]: W1031 00:45:53.067666 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.067741 kubelet[2117]: E1031 00:45:53.067717 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.071707 kubelet[2117]: E1031 00:45:53.071686 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.071707 kubelet[2117]: W1031 00:45:53.071702 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.071869 kubelet[2117]: E1031 00:45:53.071752 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.072013 kubelet[2117]: E1031 00:45:53.071997 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.072013 kubelet[2117]: W1031 00:45:53.072010 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.072132 kubelet[2117]: E1031 00:45:53.072106 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.072229 kubelet[2117]: E1031 00:45:53.072209 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.072229 kubelet[2117]: W1031 00:45:53.072222 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.072345 kubelet[2117]: E1031 00:45:53.072321 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.072485 kubelet[2117]: E1031 00:45:53.072466 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.072485 kubelet[2117]: W1031 00:45:53.072478 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.072576 kubelet[2117]: E1031 00:45:53.072556 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.072699 kubelet[2117]: E1031 00:45:53.072681 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.072699 kubelet[2117]: W1031 00:45:53.072693 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.072757 kubelet[2117]: E1031 00:45:53.072734 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.072844 kubelet[2117]: E1031 00:45:53.072827 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.072844 kubelet[2117]: W1031 00:45:53.072838 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.072913 kubelet[2117]: E1031 00:45:53.072853 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.073024 kubelet[2117]: E1031 00:45:53.073007 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.073024 kubelet[2117]: W1031 00:45:53.073017 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.073024 kubelet[2117]: E1031 00:45:53.073036 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.073317 kubelet[2117]: E1031 00:45:53.073292 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.073317 kubelet[2117]: W1031 00:45:53.073308 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.073317 kubelet[2117]: E1031 00:45:53.073322 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.073588 kubelet[2117]: E1031 00:45:53.073546 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.073588 kubelet[2117]: W1031 00:45:53.073560 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.073588 kubelet[2117]: E1031 00:45:53.073576 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.073856 kubelet[2117]: E1031 00:45:53.073816 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.073856 kubelet[2117]: W1031 00:45:53.073829 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.074139 kubelet[2117]: E1031 00:45:53.073963 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.074249 kubelet[2117]: E1031 00:45:53.074229 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.074249 kubelet[2117]: W1031 00:45:53.074242 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.074336 kubelet[2117]: E1031 00:45:53.074309 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.074448 kubelet[2117]: E1031 00:45:53.074437 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.074448 kubelet[2117]: W1031 00:45:53.074447 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.074589 kubelet[2117]: E1031 00:45:53.074473 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.074843 kubelet[2117]: E1031 00:45:53.074823 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.074843 kubelet[2117]: W1031 00:45:53.074838 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.074916 kubelet[2117]: E1031 00:45:53.074854 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.075074 kubelet[2117]: E1031 00:45:53.075021 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.075074 kubelet[2117]: W1031 00:45:53.075041 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.075074 kubelet[2117]: E1031 00:45:53.075072 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.075259 kubelet[2117]: E1031 00:45:53.075245 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.075259 kubelet[2117]: W1031 00:45:53.075257 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.075354 kubelet[2117]: E1031 00:45:53.075281 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.077386 kubelet[2117]: E1031 00:45:53.075872 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.077386 kubelet[2117]: W1031 00:45:53.075889 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.077386 kubelet[2117]: E1031 00:45:53.075903 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.077817 kubelet[2117]: E1031 00:45:53.077786 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.077817 kubelet[2117]: W1031 00:45:53.077809 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.077918 kubelet[2117]: E1031 00:45:53.077825 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:53.092158 kubelet[2117]: E1031 00:45:53.092124 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:53.092158 kubelet[2117]: W1031 00:45:53.092152 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:53.092301 kubelet[2117]: E1031 00:45:53.092174 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:53.098060 env[1321]: time="2025-10-31T00:45:53.098012582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d66c4,Uid:ec20dd65-8434-41eb-a061-18cee3683d0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909\"" Oct 31 00:45:53.098930 kubelet[2117]: E1031 00:45:53.098741 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:53.881767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804550367.mount: Deactivated successfully. Oct 31 00:45:54.422798 kubelet[2117]: E1031 00:45:54.422463 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:45:54.727179 env[1321]: time="2025-10-31T00:45:54.727133224Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:54.729063 env[1321]: time="2025-10-31T00:45:54.729034423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:54.730561 env[1321]: time="2025-10-31T00:45:54.730522412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:54.732482 env[1321]: time="2025-10-31T00:45:54.732450700Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:54.733129 env[1321]: time="2025-10-31T00:45:54.733104866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 31 00:45:54.739660 env[1321]: time="2025-10-31T00:45:54.739022811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 00:45:54.760007 env[1321]: time="2025-10-31T00:45:54.759501904Z" level=info msg="CreateContainer within sandbox \"f46dfa0c6c43ba1d0f5910fda26e022d61fefe5f44717c457b646e1daee8cadb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 00:45:54.802774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537601668.mount: Deactivated successfully. Oct 31 00:45:54.806283 env[1321]: time="2025-10-31T00:45:54.806239753Z" level=info msg="CreateContainer within sandbox \"f46dfa0c6c43ba1d0f5910fda26e022d61fefe5f44717c457b646e1daee8cadb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d0ee0eb3c005053039f67bc8daaf798054891e27e31204163b2151fa4867a1e2\"" Oct 31 00:45:54.806945 env[1321]: time="2025-10-31T00:45:54.806915526Z" level=info msg="StartContainer for \"d0ee0eb3c005053039f67bc8daaf798054891e27e31204163b2151fa4867a1e2\"" Oct 31 00:45:54.863904 env[1321]: time="2025-10-31T00:45:54.863641121Z" level=info msg="StartContainer for \"d0ee0eb3c005053039f67bc8daaf798054891e27e31204163b2151fa4867a1e2\" returns successfully" Oct 31 00:45:55.481613 kubelet[2117]: E1031 00:45:55.481581 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:55.569495 kubelet[2117]: E1031 00:45:55.569467 2117 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.569495 kubelet[2117]: W1031 00:45:55.569488 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.570615 kubelet[2117]: E1031 00:45:55.570580 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:55.570839 kubelet[2117]: E1031 00:45:55.570825 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.570839 kubelet[2117]: W1031 00:45:55.570839 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.570914 kubelet[2117]: E1031 00:45:55.570851 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:55.571011 kubelet[2117]: E1031 00:45:55.571000 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.571047 kubelet[2117]: W1031 00:45:55.571011 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.571047 kubelet[2117]: E1031 00:45:55.571020 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:55.571230 kubelet[2117]: E1031 00:45:55.571220 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.571318 kubelet[2117]: W1031 00:45:55.571230 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.571318 kubelet[2117]: E1031 00:45:55.571240 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:55.571402 kubelet[2117]: E1031 00:45:55.571391 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.571402 kubelet[2117]: W1031 00:45:55.571402 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.571483 kubelet[2117]: E1031 00:45:55.571411 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:55.571575 kubelet[2117]: E1031 00:45:55.571565 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.571618 kubelet[2117]: W1031 00:45:55.571575 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.571618 kubelet[2117]: E1031 00:45:55.571583 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:55.571723 kubelet[2117]: E1031 00:45:55.571713 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.571760 kubelet[2117]: W1031 00:45:55.571724 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.571760 kubelet[2117]: E1031 00:45:55.571732 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:55.571878 kubelet[2117]: E1031 00:45:55.571869 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.571878 kubelet[2117]: W1031 00:45:55.571878 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.571940 kubelet[2117]: E1031 00:45:55.571886 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:55.572034 kubelet[2117]: E1031 00:45:55.572025 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.572079 kubelet[2117]: W1031 00:45:55.572035 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.572079 kubelet[2117]: E1031 00:45:55.572043 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:55.572172 kubelet[2117]: E1031 00:45:55.572163 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.572172 kubelet[2117]: W1031 00:45:55.572171 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.572238 kubelet[2117]: E1031 00:45:55.572179 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:45:55.572312 kubelet[2117]: E1031 00:45:55.572303 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:45:55.572312 kubelet[2117]: W1031 00:45:55.572312 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:45:55.572387 kubelet[2117]: E1031 00:45:55.572320 2117 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:45:55.822499 env[1321]: time="2025-10-31T00:45:55.822341455Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:55.825188 env[1321]: time="2025-10-31T00:45:55.825157708Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:55.826409 env[1321]: time="2025-10-31T00:45:55.826383159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:55.828720 env[1321]: time="2025-10-31T00:45:55.828410973Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:55.829164 env[1321]: time="2025-10-31T00:45:55.828684896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 31 00:45:55.832359 env[1321]: time="2025-10-31T00:45:55.832329079Z" level=info msg="CreateContainer within sandbox \"4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 00:45:55.843891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832525908.mount: Deactivated successfully. 
Oct 31 00:45:55.845400 env[1321]: time="2025-10-31T00:45:55.845367026Z" level=info msg="CreateContainer within sandbox \"4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"08f0ad35335ec54d65018291d22b74cba1c1b237f1c92a139acac545ea31276d\"" Oct 31 00:45:55.847117 env[1321]: time="2025-10-31T00:45:55.847084306Z" level=info msg="StartContainer for \"08f0ad35335ec54d65018291d22b74cba1c1b237f1c92a139acac545ea31276d\"" Oct 31 00:45:55.905448 env[1321]: time="2025-10-31T00:45:55.904880442Z" level=info msg="StartContainer for \"08f0ad35335ec54d65018291d22b74cba1c1b237f1c92a139acac545ea31276d\" returns successfully" Oct 31 00:45:55.938821 env[1321]: time="2025-10-31T00:45:55.938775943Z" level=info msg="shim disconnected" id=08f0ad35335ec54d65018291d22b74cba1c1b237f1c92a139acac545ea31276d Oct 31 00:45:55.938821 env[1321]: time="2025-10-31T00:45:55.938820797Z" level=warning msg="cleaning up after shim disconnected" id=08f0ad35335ec54d65018291d22b74cba1c1b237f1c92a139acac545ea31276d namespace=k8s.io Oct 31 00:45:55.939050 env[1321]: time="2025-10-31T00:45:55.938830080Z" level=info msg="cleaning up dead shim" Oct 31 00:45:55.945121 env[1321]: time="2025-10-31T00:45:55.945084773Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:45:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2820 runtime=io.containerd.runc.v2\n" Oct 31 00:45:56.421848 kubelet[2117]: E1031 00:45:56.421506 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:45:56.484598 kubelet[2117]: I1031 00:45:56.484549 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:45:56.485067 
kubelet[2117]: E1031 00:45:56.485034 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:56.485360 kubelet[2117]: E1031 00:45:56.485326 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:56.486207 env[1321]: time="2025-10-31T00:45:56.486164937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 00:45:56.515016 kubelet[2117]: I1031 00:45:56.514575 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7984d9f79c-mqhkw" podStartSLOduration=2.540115144 podStartE2EDuration="4.514559963s" podCreationTimestamp="2025-10-31 00:45:52 +0000 UTC" firstStartedPulling="2025-10-31 00:45:52.763794905 +0000 UTC m=+23.456882662" lastFinishedPulling="2025-10-31 00:45:54.738239724 +0000 UTC m=+25.431327481" observedRunningTime="2025-10-31 00:45:55.49506826 +0000 UTC m=+26.188156017" watchObservedRunningTime="2025-10-31 00:45:56.514559963 +0000 UTC m=+27.207647720" Oct 31 00:45:56.752876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08f0ad35335ec54d65018291d22b74cba1c1b237f1c92a139acac545ea31276d-rootfs.mount: Deactivated successfully. 
Oct 31 00:45:58.422008 kubelet[2117]: E1031 00:45:58.421946 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:45:59.113055 env[1321]: time="2025-10-31T00:45:59.113009864Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:59.114648 env[1321]: time="2025-10-31T00:45:59.114616803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:59.116076 env[1321]: time="2025-10-31T00:45:59.116049936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:59.117472 env[1321]: time="2025-10-31T00:45:59.117446459Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:45:59.117987 env[1321]: time="2025-10-31T00:45:59.117960713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 31 00:45:59.121700 env[1321]: time="2025-10-31T00:45:59.121370641Z" level=info msg="CreateContainer within sandbox \"4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 00:45:59.134482 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2033650175.mount: Deactivated successfully. Oct 31 00:45:59.136634 env[1321]: time="2025-10-31T00:45:59.136595086Z" level=info msg="CreateContainer within sandbox \"4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"efb29e45170ac1fbcb293dacf6f177ec1876cce46b35792d7fedbf4351688a01\"" Oct 31 00:45:59.138520 env[1321]: time="2025-10-31T00:45:59.137510044Z" level=info msg="StartContainer for \"efb29e45170ac1fbcb293dacf6f177ec1876cce46b35792d7fedbf4351688a01\"" Oct 31 00:45:59.201727 env[1321]: time="2025-10-31T00:45:59.201672994Z" level=info msg="StartContainer for \"efb29e45170ac1fbcb293dacf6f177ec1876cce46b35792d7fedbf4351688a01\" returns successfully" Oct 31 00:45:59.495100 kubelet[2117]: E1031 00:45:59.495055 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:59.965097 env[1321]: time="2025-10-31T00:45:59.965035748Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:45:59.985073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efb29e45170ac1fbcb293dacf6f177ec1876cce46b35792d7fedbf4351688a01-rootfs.mount: Deactivated successfully. 
Oct 31 00:45:59.991064 kubelet[2117]: I1031 00:45:59.990643 2117 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 00:45:59.992015 env[1321]: time="2025-10-31T00:45:59.991942195Z" level=info msg="shim disconnected" id=efb29e45170ac1fbcb293dacf6f177ec1876cce46b35792d7fedbf4351688a01 Oct 31 00:45:59.992226 env[1321]: time="2025-10-31T00:45:59.992194541Z" level=warning msg="cleaning up after shim disconnected" id=efb29e45170ac1fbcb293dacf6f177ec1876cce46b35792d7fedbf4351688a01 namespace=k8s.io Oct 31 00:45:59.992398 env[1321]: time="2025-10-31T00:45:59.992380989Z" level=info msg="cleaning up dead shim" Oct 31 00:46:00.004265 env[1321]: time="2025-10-31T00:46:00.004222799Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:45:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2892 runtime=io.containerd.runc.v2\n" Oct 31 00:46:00.130784 kubelet[2117]: I1031 00:46:00.130715 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/542c9f03-90da-4571-a183-2191a31bfb63-goldmane-key-pair\") pod \"goldmane-666569f655-tzh9k\" (UID: \"542c9f03-90da-4571-a183-2191a31bfb63\") " pod="calico-system/goldmane-666569f655-tzh9k" Oct 31 00:46:00.130784 kubelet[2117]: I1031 00:46:00.130770 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfz7r\" (UniqueName: \"kubernetes.io/projected/542c9f03-90da-4571-a183-2191a31bfb63-kube-api-access-wfz7r\") pod \"goldmane-666569f655-tzh9k\" (UID: \"542c9f03-90da-4571-a183-2191a31bfb63\") " pod="calico-system/goldmane-666569f655-tzh9k" Oct 31 00:46:00.130784 kubelet[2117]: I1031 00:46:00.130790 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542c9f03-90da-4571-a183-2191a31bfb63-config\") pod \"goldmane-666569f655-tzh9k\" 
(UID: \"542c9f03-90da-4571-a183-2191a31bfb63\") " pod="calico-system/goldmane-666569f655-tzh9k" Oct 31 00:46:00.131011 kubelet[2117]: I1031 00:46:00.130820 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09069f0b-a951-47c9-a38b-43b3cfe8a3b6-config-volume\") pod \"coredns-668d6bf9bc-rrkhq\" (UID: \"09069f0b-a951-47c9-a38b-43b3cfe8a3b6\") " pod="kube-system/coredns-668d6bf9bc-rrkhq" Oct 31 00:46:00.131011 kubelet[2117]: I1031 00:46:00.130868 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/68befc49-9413-4be7-9089-5bb6c17bda13-calico-apiserver-certs\") pod \"calico-apiserver-94987b775-7bbdc\" (UID: \"68befc49-9413-4be7-9089-5bb6c17bda13\") " pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" Oct 31 00:46:00.131011 kubelet[2117]: I1031 00:46:00.130912 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c-tigera-ca-bundle\") pod \"calico-kube-controllers-67c5c54685-nbdhs\" (UID: \"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c\") " pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" Oct 31 00:46:00.131011 kubelet[2117]: I1031 00:46:00.130935 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxqg\" (UniqueName: \"kubernetes.io/projected/ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715-kube-api-access-wsxqg\") pod \"coredns-668d6bf9bc-wsg2w\" (UID: \"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715\") " pod="kube-system/coredns-668d6bf9bc-wsg2w" Oct 31 00:46:00.131011 kubelet[2117]: I1031 00:46:00.130964 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-ca-bundle\") pod \"whisker-689d974798-k2nfx\" (UID: \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\") " pod="calico-system/whisker-689d974798-k2nfx" Oct 31 00:46:00.131163 kubelet[2117]: I1031 00:46:00.130995 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07e12617-5c5d-4e42-9bef-37ca707707aa-calico-apiserver-certs\") pod \"calico-apiserver-94987b775-fhccb\" (UID: \"07e12617-5c5d-4e42-9bef-37ca707707aa\") " pod="calico-apiserver/calico-apiserver-94987b775-fhccb" Oct 31 00:46:00.131163 kubelet[2117]: I1031 00:46:00.131014 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-backend-key-pair\") pod \"whisker-689d974798-k2nfx\" (UID: \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\") " pod="calico-system/whisker-689d974798-k2nfx" Oct 31 00:46:00.131163 kubelet[2117]: I1031 00:46:00.131040 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/542c9f03-90da-4571-a183-2191a31bfb63-goldmane-ca-bundle\") pod \"goldmane-666569f655-tzh9k\" (UID: \"542c9f03-90da-4571-a183-2191a31bfb63\") " pod="calico-system/goldmane-666569f655-tzh9k" Oct 31 00:46:00.131163 kubelet[2117]: I1031 00:46:00.131058 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvxnx\" (UniqueName: \"kubernetes.io/projected/07e12617-5c5d-4e42-9bef-37ca707707aa-kube-api-access-vvxnx\") pod \"calico-apiserver-94987b775-fhccb\" (UID: \"07e12617-5c5d-4e42-9bef-37ca707707aa\") " pod="calico-apiserver/calico-apiserver-94987b775-fhccb" Oct 31 00:46:00.131163 kubelet[2117]: I1031 00:46:00.131086 2117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rvk\" (UniqueName: \"kubernetes.io/projected/3387b160-8e0e-4dce-9d1c-94a068df8ae3-kube-api-access-b4rvk\") pod \"whisker-689d974798-k2nfx\" (UID: \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\") " pod="calico-system/whisker-689d974798-k2nfx" Oct 31 00:46:00.131335 kubelet[2117]: I1031 00:46:00.131103 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5v2l\" (UniqueName: \"kubernetes.io/projected/40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c-kube-api-access-d5v2l\") pod \"calico-kube-controllers-67c5c54685-nbdhs\" (UID: \"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c\") " pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" Oct 31 00:46:00.131335 kubelet[2117]: I1031 00:46:00.131119 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715-config-volume\") pod \"coredns-668d6bf9bc-wsg2w\" (UID: \"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715\") " pod="kube-system/coredns-668d6bf9bc-wsg2w" Oct 31 00:46:00.131335 kubelet[2117]: I1031 00:46:00.131145 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zphqd\" (UniqueName: \"kubernetes.io/projected/68befc49-9413-4be7-9089-5bb6c17bda13-kube-api-access-zphqd\") pod \"calico-apiserver-94987b775-7bbdc\" (UID: \"68befc49-9413-4be7-9089-5bb6c17bda13\") " pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" Oct 31 00:46:00.131335 kubelet[2117]: I1031 00:46:00.131159 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59cq\" (UniqueName: \"kubernetes.io/projected/09069f0b-a951-47c9-a38b-43b3cfe8a3b6-kube-api-access-k59cq\") pod \"coredns-668d6bf9bc-rrkhq\" (UID: \"09069f0b-a951-47c9-a38b-43b3cfe8a3b6\") " 
pod="kube-system/coredns-668d6bf9bc-rrkhq" Oct 31 00:46:00.320056 kubelet[2117]: E1031 00:46:00.319942 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:00.321751 env[1321]: time="2025-10-31T00:46:00.321705067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsg2w,Uid:ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715,Namespace:kube-system,Attempt:0,}" Oct 31 00:46:00.325605 env[1321]: time="2025-10-31T00:46:00.325392514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c5c54685-nbdhs,Uid:40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c,Namespace:calico-system,Attempt:0,}" Oct 31 00:46:00.326077 env[1321]: time="2025-10-31T00:46:00.325887998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-689d974798-k2nfx,Uid:3387b160-8e0e-4dce-9d1c-94a068df8ae3,Namespace:calico-system,Attempt:0,}" Oct 31 00:46:00.327097 kubelet[2117]: E1031 00:46:00.327057 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:00.328118 env[1321]: time="2025-10-31T00:46:00.328033018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rrkhq,Uid:09069f0b-a951-47c9-a38b-43b3cfe8a3b6,Namespace:kube-system,Attempt:0,}" Oct 31 00:46:00.331050 env[1321]: time="2025-10-31T00:46:00.330870451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-7bbdc,Uid:68befc49-9413-4be7-9089-5bb6c17bda13,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:46:00.332874 env[1321]: time="2025-10-31T00:46:00.332669704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-fhccb,Uid:07e12617-5c5d-4e42-9bef-37ca707707aa,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:46:00.335332 env[1321]: 
time="2025-10-31T00:46:00.335032418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tzh9k,Uid:542c9f03-90da-4571-a183-2191a31bfb63,Namespace:calico-system,Attempt:0,}" Oct 31 00:46:00.435754 env[1321]: time="2025-10-31T00:46:00.435705491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25c9f,Uid:c0bdf479-9385-4085-afb4-2cdc588aefd9,Namespace:calico-system,Attempt:0,}" Oct 31 00:46:00.500884 kubelet[2117]: E1031 00:46:00.499953 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:00.501941 env[1321]: time="2025-10-31T00:46:00.501906657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 00:46:00.518537 env[1321]: time="2025-10-31T00:46:00.518473142Z" level=error msg="Failed to destroy network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.519065 env[1321]: time="2025-10-31T00:46:00.519031523Z" level=error msg="encountered an error cleaning up failed sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.519197 env[1321]: time="2025-10-31T00:46:00.519169717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsg2w,Uid:ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.520329 kubelet[2117]: E1031 00:46:00.520218 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.520993 kubelet[2117]: E1031 00:46:00.520785 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wsg2w" Oct 31 00:46:00.520993 kubelet[2117]: E1031 00:46:00.520828 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wsg2w" Oct 31 00:46:00.520993 kubelet[2117]: E1031 00:46:00.520888 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wsg2w_kube-system(ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wsg2w_kube-system(ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wsg2w" podUID="ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715" Oct 31 00:46:00.551354 env[1321]: time="2025-10-31T00:46:00.551297596Z" level=error msg="Failed to destroy network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.551686 env[1321]: time="2025-10-31T00:46:00.551653245Z" level=error msg="encountered an error cleaning up failed sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.551759 env[1321]: time="2025-10-31T00:46:00.551704178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tzh9k,Uid:542c9f03-90da-4571-a183-2191a31bfb63,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.551972 kubelet[2117]: E1031 00:46:00.551933 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.552037 kubelet[2117]: E1031 00:46:00.551996 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tzh9k" Oct 31 00:46:00.552037 kubelet[2117]: E1031 00:46:00.552018 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tzh9k" Oct 31 00:46:00.552102 kubelet[2117]: E1031 00:46:00.552062 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-tzh9k_calico-system(542c9f03-90da-4571-a183-2191a31bfb63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-tzh9k_calico-system(542c9f03-90da-4571-a183-2191a31bfb63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-tzh9k" 
podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:00.553859 env[1321]: time="2025-10-31T00:46:00.553810267Z" level=error msg="Failed to destroy network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.554265 env[1321]: time="2025-10-31T00:46:00.554229373Z" level=error msg="encountered an error cleaning up failed sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.554337 env[1321]: time="2025-10-31T00:46:00.554278785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c5c54685-nbdhs,Uid:40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.554630 kubelet[2117]: E1031 00:46:00.554506 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.554630 kubelet[2117]: E1031 00:46:00.554547 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" Oct 31 00:46:00.554630 kubelet[2117]: E1031 00:46:00.554563 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" Oct 31 00:46:00.555191 kubelet[2117]: E1031 00:46:00.554591 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67c5c54685-nbdhs_calico-system(40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67c5c54685-nbdhs_calico-system(40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:00.556209 env[1321]: time="2025-10-31T00:46:00.555663213Z" level=error msg="Failed to destroy network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.556606 env[1321]: time="2025-10-31T00:46:00.556463694Z" level=error msg="Failed to destroy network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.557289 env[1321]: time="2025-10-31T00:46:00.557254013Z" level=error msg="encountered an error cleaning up failed sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.557634 env[1321]: time="2025-10-31T00:46:00.557300265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-fhccb,Uid:07e12617-5c5d-4e42-9bef-37ca707707aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.557716 kubelet[2117]: E1031 00:46:00.557605 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.557716 kubelet[2117]: E1031 
00:46:00.557641 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" Oct 31 00:46:00.557716 kubelet[2117]: E1031 00:46:00.557658 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" Oct 31 00:46:00.559135 kubelet[2117]: E1031 00:46:00.557694 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94987b775-fhccb_calico-apiserver(07e12617-5c5d-4e42-9bef-37ca707707aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94987b775-fhccb_calico-apiserver(07e12617-5c5d-4e42-9bef-37ca707707aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:46:00.559135 kubelet[2117]: E1031 00:46:00.558873 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.559135 kubelet[2117]: E1031 00:46:00.558908 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rrkhq" Oct 31 00:46:00.559281 env[1321]: time="2025-10-31T00:46:00.558090223Z" level=error msg="encountered an error cleaning up failed sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.559281 env[1321]: time="2025-10-31T00:46:00.558143477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rrkhq,Uid:09069f0b-a951-47c9-a38b-43b3cfe8a3b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.559281 env[1321]: time="2025-10-31T00:46:00.558517251Z" level=error msg="Failed to destroy network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.559281 env[1321]: time="2025-10-31T00:46:00.558844853Z" level=error msg="encountered an error cleaning up failed sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.559281 env[1321]: time="2025-10-31T00:46:00.558901387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-689d974798-k2nfx,Uid:3387b160-8e0e-4dce-9d1c-94a068df8ae3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.559480 kubelet[2117]: E1031 00:46:00.558927 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rrkhq" Oct 31 00:46:00.559480 kubelet[2117]: E1031 00:46:00.558953 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rrkhq_kube-system(09069f0b-a951-47c9-a38b-43b3cfe8a3b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rrkhq_kube-system(09069f0b-a951-47c9-a38b-43b3cfe8a3b6)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rrkhq" podUID="09069f0b-a951-47c9-a38b-43b3cfe8a3b6" Oct 31 00:46:00.559480 kubelet[2117]: E1031 00:46:00.559013 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.559714 kubelet[2117]: E1031 00:46:00.559031 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-689d974798-k2nfx" Oct 31 00:46:00.559714 kubelet[2117]: E1031 00:46:00.559615 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-689d974798-k2nfx" Oct 31 00:46:00.559714 kubelet[2117]: E1031 00:46:00.559657 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-689d974798-k2nfx_calico-system(3387b160-8e0e-4dce-9d1c-94a068df8ae3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-689d974798-k2nfx_calico-system(3387b160-8e0e-4dce-9d1c-94a068df8ae3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-689d974798-k2nfx" podUID="3387b160-8e0e-4dce-9d1c-94a068df8ae3" Oct 31 00:46:00.566901 env[1321]: time="2025-10-31T00:46:00.566559193Z" level=error msg="Failed to destroy network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.567025 env[1321]: time="2025-10-31T00:46:00.566925205Z" level=error msg="encountered an error cleaning up failed sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.567025 env[1321]: time="2025-10-31T00:46:00.566969296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-7bbdc,Uid:68befc49-9413-4be7-9089-5bb6c17bda13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 31 00:46:00.567299 kubelet[2117]: E1031 00:46:00.567229 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.567370 kubelet[2117]: E1031 00:46:00.567303 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" Oct 31 00:46:00.567370 kubelet[2117]: E1031 00:46:00.567333 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" Oct 31 00:46:00.567458 kubelet[2117]: E1031 00:46:00.567375 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94987b775-7bbdc_calico-apiserver(68befc49-9413-4be7-9089-5bb6c17bda13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94987b775-7bbdc_calico-apiserver(68befc49-9413-4be7-9089-5bb6c17bda13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13" Oct 31 00:46:00.579583 env[1321]: time="2025-10-31T00:46:00.579495366Z" level=error msg="Failed to destroy network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.580011 env[1321]: time="2025-10-31T00:46:00.579978047Z" level=error msg="encountered an error cleaning up failed sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.580175 env[1321]: time="2025-10-31T00:46:00.580103799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25c9f,Uid:c0bdf479-9385-4085-afb4-2cdc588aefd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.580583 kubelet[2117]: E1031 00:46:00.580548 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:00.580651 kubelet[2117]: E1031 00:46:00.580606 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-25c9f" Oct 31 00:46:00.580651 kubelet[2117]: E1031 00:46:00.580626 2117 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-25c9f" Oct 31 00:46:00.580714 kubelet[2117]: E1031 00:46:00.580661 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:01.502723 env[1321]: 
time="2025-10-31T00:46:01.502681192Z" level=info msg="StopPodSandbox for \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\"" Oct 31 00:46:01.503059 kubelet[2117]: I1031 00:46:01.503040 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:01.504615 kubelet[2117]: I1031 00:46:01.504570 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:01.505189 env[1321]: time="2025-10-31T00:46:01.505091658Z" level=info msg="StopPodSandbox for \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\"" Oct 31 00:46:01.506374 kubelet[2117]: I1031 00:46:01.506340 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:01.506835 env[1321]: time="2025-10-31T00:46:01.506802954Z" level=info msg="StopPodSandbox for \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\"" Oct 31 00:46:01.508202 kubelet[2117]: I1031 00:46:01.508025 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:01.508823 env[1321]: time="2025-10-31T00:46:01.508563582Z" level=info msg="StopPodSandbox for \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\"" Oct 31 00:46:01.509865 kubelet[2117]: I1031 00:46:01.509838 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:01.511723 env[1321]: time="2025-10-31T00:46:01.511691382Z" level=info msg="StopPodSandbox for \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\"" Oct 31 00:46:01.512435 kubelet[2117]: I1031 00:46:01.512391 2117 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:01.513077 env[1321]: time="2025-10-31T00:46:01.513037069Z" level=info msg="StopPodSandbox for \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\"" Oct 31 00:46:01.515165 kubelet[2117]: I1031 00:46:01.515104 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:01.516373 env[1321]: time="2025-10-31T00:46:01.516318587Z" level=info msg="StopPodSandbox for \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\"" Oct 31 00:46:01.517113 kubelet[2117]: I1031 00:46:01.517088 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:01.517772 env[1321]: time="2025-10-31T00:46:01.517623144Z" level=info msg="StopPodSandbox for \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\"" Oct 31 00:46:01.545676 env[1321]: time="2025-10-31T00:46:01.545618267Z" level=error msg="StopPodSandbox for \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\" failed" error="failed to destroy network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.547288 kubelet[2117]: E1031 00:46:01.547242 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:01.548569 kubelet[2117]: E1031 00:46:01.548510 2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee"} Oct 31 00:46:01.548662 kubelet[2117]: E1031 00:46:01.548588 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68befc49-9413-4be7-9089-5bb6c17bda13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.548662 kubelet[2117]: E1031 00:46:01.548610 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68befc49-9413-4be7-9089-5bb6c17bda13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13" Oct 31 00:46:01.548821 env[1321]: time="2025-10-31T00:46:01.548778275Z" level=error msg="StopPodSandbox for \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\" failed" error="failed to destroy network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.548965 kubelet[2117]: E1031 00:46:01.548932 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:01.549018 kubelet[2117]: E1031 00:46:01.548965 2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c"} Oct 31 00:46:01.549018 kubelet[2117]: E1031 00:46:01.548986 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09069f0b-a951-47c9-a38b-43b3cfe8a3b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.549018 kubelet[2117]: E1031 00:46:01.549004 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09069f0b-a951-47c9-a38b-43b3cfe8a3b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rrkhq" 
podUID="09069f0b-a951-47c9-a38b-43b3cfe8a3b6" Oct 31 00:46:01.560894 env[1321]: time="2025-10-31T00:46:01.560842327Z" level=error msg="StopPodSandbox for \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\" failed" error="failed to destroy network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.561230 kubelet[2117]: E1031 00:46:01.561193 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:01.561302 kubelet[2117]: E1031 00:46:01.561238 2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6"} Oct 31 00:46:01.561302 kubelet[2117]: E1031 00:46:01.561275 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.561452 kubelet[2117]: E1031 00:46:01.561296 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wsg2w" podUID="ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715" Oct 31 00:46:01.569250 env[1321]: time="2025-10-31T00:46:01.569195997Z" level=error msg="StopPodSandbox for \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\" failed" error="failed to destroy network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.569686 kubelet[2117]: E1031 00:46:01.569639 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:01.569782 kubelet[2117]: E1031 00:46:01.569699 2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48"} Oct 31 00:46:01.569782 kubelet[2117]: E1031 00:46:01.569732 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.569782 kubelet[2117]: E1031 00:46:01.569755 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-689d974798-k2nfx" podUID="3387b160-8e0e-4dce-9d1c-94a068df8ae3" Oct 31 00:46:01.570916 env[1321]: time="2025-10-31T00:46:01.570876645Z" level=error msg="StopPodSandbox for \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\" failed" error="failed to destroy network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.571200 kubelet[2117]: E1031 00:46:01.571168 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:01.571260 kubelet[2117]: E1031 00:46:01.571200 
2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9"} Oct 31 00:46:01.571260 kubelet[2117]: E1031 00:46:01.571225 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.571260 kubelet[2117]: E1031 00:46:01.571241 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:01.574364 env[1321]: time="2025-10-31T00:46:01.574311840Z" level=error msg="StopPodSandbox for \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\" failed" error="failed to destroy network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.574745 kubelet[2117]: E1031 00:46:01.574679 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:01.574820 kubelet[2117]: E1031 00:46:01.574752 2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873"} Oct 31 00:46:01.574820 kubelet[2117]: E1031 00:46:01.574779 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07e12617-5c5d-4e42-9bef-37ca707707aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.574820 kubelet[2117]: E1031 00:46:01.574797 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07e12617-5c5d-4e42-9bef-37ca707707aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:46:01.582307 env[1321]: time="2025-10-31T00:46:01.582217322Z" level=error msg="StopPodSandbox for 
\"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\" failed" error="failed to destroy network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.582508 kubelet[2117]: E1031 00:46:01.582463 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:01.582561 kubelet[2117]: E1031 00:46:01.582512 2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799"} Oct 31 00:46:01.582561 kubelet[2117]: E1031 00:46:01.582543 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0bdf479-9385-4085-afb4-2cdc588aefd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.582643 kubelet[2117]: E1031 00:46:01.582562 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0bdf479-9385-4085-afb4-2cdc588aefd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:01.586617 env[1321]: time="2025-10-31T00:46:01.586562898Z" level=error msg="StopPodSandbox for \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\" failed" error="failed to destroy network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:46:01.587554 kubelet[2117]: E1031 00:46:01.587519 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:01.587627 kubelet[2117]: E1031 00:46:01.587561 2117 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3"} Oct 31 00:46:01.587627 kubelet[2117]: E1031 00:46:01.587586 2117 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"542c9f03-90da-4571-a183-2191a31bfb63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:46:01.587627 kubelet[2117]: E1031 00:46:01.587606 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"542c9f03-90da-4571-a183-2191a31bfb63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:06.154755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656194522.mount: Deactivated successfully. Oct 31 00:46:06.416089 env[1321]: time="2025-10-31T00:46:06.415980810Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:46:06.422283 env[1321]: time="2025-10-31T00:46:06.422229631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:46:06.427360 env[1321]: time="2025-10-31T00:46:06.427312890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:46:06.431644 env[1321]: time="2025-10-31T00:46:06.431602863Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Oct 31 00:46:06.432451 env[1321]: time="2025-10-31T00:46:06.432398509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 31 00:46:06.452444 env[1321]: time="2025-10-31T00:46:06.452378548Z" level=info msg="CreateContainer within sandbox \"4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 00:46:06.473719 env[1321]: time="2025-10-31T00:46:06.473626212Z" level=info msg="CreateContainer within sandbox \"4f1965a07c1ca952e0b8b20b2a8614a2986b9266f3b8ee53490c55226cdbc909\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9140837899ca4f2d218b6d7607eaeb24a547062af74867e0bae1139788719395\"" Oct 31 00:46:06.475843 env[1321]: time="2025-10-31T00:46:06.475815988Z" level=info msg="StartContainer for \"9140837899ca4f2d218b6d7607eaeb24a547062af74867e0bae1139788719395\"" Oct 31 00:46:06.644182 env[1321]: time="2025-10-31T00:46:06.644032170Z" level=info msg="StartContainer for \"9140837899ca4f2d218b6d7607eaeb24a547062af74867e0bae1139788719395\" returns successfully" Oct 31 00:46:06.675330 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 00:46:06.675485 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 00:46:07.008932 env[1321]: time="2025-10-31T00:46:07.008879728Z" level=info msg="StopPodSandbox for \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\"" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.118 [INFO][3396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.119 [INFO][3396] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" iface="eth0" netns="/var/run/netns/cni-6b472228-9785-01aa-c5d2-6796872fd8c4" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.120 [INFO][3396] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" iface="eth0" netns="/var/run/netns/cni-6b472228-9785-01aa-c5d2-6796872fd8c4" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.121 [INFO][3396] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" iface="eth0" netns="/var/run/netns/cni-6b472228-9785-01aa-c5d2-6796872fd8c4" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.121 [INFO][3396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.121 [INFO][3396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.208 [INFO][3408] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.208 [INFO][3408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.208 [INFO][3408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.220 [WARNING][3408] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.220 [INFO][3408] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.222 [INFO][3408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:07.227402 env[1321]: 2025-10-31 00:46:07.225 [INFO][3396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:07.234867 env[1321]: time="2025-10-31T00:46:07.230021666Z" level=info msg="TearDown network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\" successfully" Oct 31 00:46:07.234867 env[1321]: time="2025-10-31T00:46:07.230066315Z" level=info msg="StopPodSandbox for \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\" returns successfully" Oct 31 00:46:07.230076 systemd[1]: run-netns-cni\x2d6b472228\x2d9785\x2d01aa\x2dc5d2\x2d6796872fd8c4.mount: Deactivated successfully. 
Oct 31 00:46:07.288020 kubelet[2117]: I1031 00:46:07.287908 2117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-ca-bundle\") pod \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\" (UID: \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\") " Oct 31 00:46:07.288020 kubelet[2117]: I1031 00:46:07.287954 2117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4rvk\" (UniqueName: \"kubernetes.io/projected/3387b160-8e0e-4dce-9d1c-94a068df8ae3-kube-api-access-b4rvk\") pod \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\" (UID: \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\") " Oct 31 00:46:07.288020 kubelet[2117]: I1031 00:46:07.287982 2117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-backend-key-pair\") pod \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\" (UID: \"3387b160-8e0e-4dce-9d1c-94a068df8ae3\") " Oct 31 00:46:07.291161 kubelet[2117]: I1031 00:46:07.291095 2117 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3387b160-8e0e-4dce-9d1c-94a068df8ae3" (UID: "3387b160-8e0e-4dce-9d1c-94a068df8ae3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:46:07.293443 kubelet[2117]: I1031 00:46:07.291735 2117 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3387b160-8e0e-4dce-9d1c-94a068df8ae3-kube-api-access-b4rvk" (OuterVolumeSpecName: "kube-api-access-b4rvk") pod "3387b160-8e0e-4dce-9d1c-94a068df8ae3" (UID: "3387b160-8e0e-4dce-9d1c-94a068df8ae3"). InnerVolumeSpecName "kube-api-access-b4rvk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:46:07.293531 systemd[1]: var-lib-kubelet-pods-3387b160\x2d8e0e\x2d4dce\x2d9d1c\x2d94a068df8ae3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4rvk.mount: Deactivated successfully. Oct 31 00:46:07.299188 systemd[1]: var-lib-kubelet-pods-3387b160\x2d8e0e\x2d4dce\x2d9d1c\x2d94a068df8ae3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 00:46:07.299360 kubelet[2117]: I1031 00:46:07.299326 2117 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3387b160-8e0e-4dce-9d1c-94a068df8ae3" (UID: "3387b160-8e0e-4dce-9d1c-94a068df8ae3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:46:07.388716 kubelet[2117]: I1031 00:46:07.388666 2117 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 00:46:07.388716 kubelet[2117]: I1031 00:46:07.388703 2117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4rvk\" (UniqueName: \"kubernetes.io/projected/3387b160-8e0e-4dce-9d1c-94a068df8ae3-kube-api-access-b4rvk\") on node \"localhost\" DevicePath \"\"" Oct 31 00:46:07.388716 kubelet[2117]: I1031 00:46:07.388713 2117 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3387b160-8e0e-4dce-9d1c-94a068df8ae3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 00:46:07.534336 kubelet[2117]: E1031 00:46:07.533562 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 31 00:46:07.591257 kubelet[2117]: I1031 00:46:07.591137 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d66c4" podStartSLOduration=2.257549793 podStartE2EDuration="15.591119862s" podCreationTimestamp="2025-10-31 00:45:52 +0000 UTC" firstStartedPulling="2025-10-31 00:45:53.10007834 +0000 UTC m=+23.793166057" lastFinishedPulling="2025-10-31 00:46:06.433648409 +0000 UTC m=+37.126736126" observedRunningTime="2025-10-31 00:46:07.590996157 +0000 UTC m=+38.284083954" watchObservedRunningTime="2025-10-31 00:46:07.591119862 +0000 UTC m=+38.284207579" Oct 31 00:46:07.690839 kubelet[2117]: I1031 00:46:07.690798 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg2tk\" (UniqueName: \"kubernetes.io/projected/69426558-399f-4dbc-9939-230d74bb54fd-kube-api-access-wg2tk\") pod \"whisker-5687bd54d-fctt7\" (UID: \"69426558-399f-4dbc-9939-230d74bb54fd\") " pod="calico-system/whisker-5687bd54d-fctt7" Oct 31 00:46:07.691062 kubelet[2117]: I1031 00:46:07.691046 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/69426558-399f-4dbc-9939-230d74bb54fd-whisker-backend-key-pair\") pod \"whisker-5687bd54d-fctt7\" (UID: \"69426558-399f-4dbc-9939-230d74bb54fd\") " pod="calico-system/whisker-5687bd54d-fctt7" Oct 31 00:46:07.691155 kubelet[2117]: I1031 00:46:07.691143 2117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69426558-399f-4dbc-9939-230d74bb54fd-whisker-ca-bundle\") pod \"whisker-5687bd54d-fctt7\" (UID: \"69426558-399f-4dbc-9939-230d74bb54fd\") " pod="calico-system/whisker-5687bd54d-fctt7" Oct 31 00:46:07.906983 env[1321]: time="2025-10-31T00:46:07.906815865Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5687bd54d-fctt7,Uid:69426558-399f-4dbc-9939-230d74bb54fd,Namespace:calico-system,Attempt:0,}" Oct 31 00:46:08.059602 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 00:46:08.059825 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia07a9ae7c73: link becomes ready Oct 31 00:46:08.057857 systemd-networkd[1103]: calia07a9ae7c73: Link UP Oct 31 00:46:08.059629 systemd-networkd[1103]: calia07a9ae7c73: Gained carrier Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:07.963 [INFO][3431] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:07.979 [INFO][3431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5687bd54d--fctt7-eth0 whisker-5687bd54d- calico-system 69426558-399f-4dbc-9939-230d74bb54fd 951 0 2025-10-31 00:46:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5687bd54d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5687bd54d-fctt7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia07a9ae7c73 [] [] }} ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:07.979 [INFO][3431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-eth0" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.005 [INFO][3445] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" HandleID="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Workload="localhost-k8s-whisker--5687bd54d--fctt7-eth0" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.005 [INFO][3445] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" HandleID="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Workload="localhost-k8s-whisker--5687bd54d--fctt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004380d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5687bd54d-fctt7", "timestamp":"2025-10-31 00:46:08.005494418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.005 [INFO][3445] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.005 [INFO][3445] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.005 [INFO][3445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.015 [INFO][3445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.023 [INFO][3445] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.028 [INFO][3445] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.030 [INFO][3445] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.032 [INFO][3445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.032 [INFO][3445] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.034 [INFO][3445] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49 Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.040 [INFO][3445] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.046 [INFO][3445] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" host="localhost" Oct 31 
00:46:08.073775 env[1321]: 2025-10-31 00:46:08.046 [INFO][3445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" host="localhost" Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.046 [INFO][3445] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:08.073775 env[1321]: 2025-10-31 00:46:08.046 [INFO][3445] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" HandleID="k8s-pod-network.461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Workload="localhost-k8s-whisker--5687bd54d--fctt7-eth0" Oct 31 00:46:08.074355 env[1321]: 2025-10-31 00:46:08.049 [INFO][3431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5687bd54d--fctt7-eth0", GenerateName:"whisker-5687bd54d-", Namespace:"calico-system", SelfLink:"", UID:"69426558-399f-4dbc-9939-230d74bb54fd", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 46, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5687bd54d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5687bd54d-fctt7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia07a9ae7c73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:08.074355 env[1321]: 2025-10-31 00:46:08.049 [INFO][3431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-eth0" Oct 31 00:46:08.074355 env[1321]: 2025-10-31 00:46:08.049 [INFO][3431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia07a9ae7c73 ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-eth0" Oct 31 00:46:08.074355 env[1321]: 2025-10-31 00:46:08.059 [INFO][3431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-eth0" Oct 31 00:46:08.074355 env[1321]: 2025-10-31 00:46:08.059 [INFO][3431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5687bd54d--fctt7-eth0", GenerateName:"whisker-5687bd54d-", Namespace:"calico-system", SelfLink:"", UID:"69426558-399f-4dbc-9939-230d74bb54fd", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 46, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5687bd54d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49", Pod:"whisker-5687bd54d-fctt7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia07a9ae7c73", MAC:"56:93:e9:a1:b6:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:08.074355 env[1321]: 2025-10-31 00:46:08.070 [INFO][3431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49" Namespace="calico-system" Pod="whisker-5687bd54d-fctt7" WorkloadEndpoint="localhost-k8s-whisker--5687bd54d--fctt7-eth0" Oct 31 00:46:08.086434 env[1321]: time="2025-10-31T00:46:08.086347196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:08.086434 env[1321]: time="2025-10-31T00:46:08.086386524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:08.086434 env[1321]: time="2025-10-31T00:46:08.086398086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:08.086621 env[1321]: time="2025-10-31T00:46:08.086540555Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49 pid=3470 runtime=io.containerd.runc.v2 Oct 31 00:46:08.091949 kubelet[2117]: I1031 00:46:08.091860 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:46:08.092366 kubelet[2117]: E1031 00:46:08.092279 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:08.135406 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:08.157328 kernel: kauditd_printk_skb: 20 callbacks suppressed Oct 31 00:46:08.157735 kernel: audit: type=1325 audit(1761871568.151:297): table=filter:103 family=2 entries=21 op=nft_register_rule pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:08.151000 audit[3505]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:08.151000 audit[3505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd2b24630 a2=0 a3=1 items=0 ppid=2228 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.162587 kernel: audit: type=1300 audit(1761871568.151:297): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd2b24630 a2=0 a3=1 items=0 ppid=2228 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.163234 kernel: audit: type=1327 audit(1761871568.151:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:08.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:08.165079 env[1321]: time="2025-10-31T00:46:08.165040589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5687bd54d-fctt7,Uid:69426558-399f-4dbc-9939-230d74bb54fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"461fd0245143cf01cb9825fe0b6d499eca1e06dc0547b731762874731c68fc49\"" Oct 31 00:46:08.165292 kernel: audit: type=1325 audit(1761871568.156:298): table=nat:104 family=2 entries=19 op=nft_register_chain pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:08.156000 audit[3505]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:08.156000 audit[3505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd2b24630 a2=0 a3=1 items=0 ppid=2228 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.167339 env[1321]: time="2025-10-31T00:46:08.167304475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 
00:46:08.171404 kernel: audit: type=1300 audit(1761871568.156:298): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd2b24630 a2=0 a3=1 items=0 ppid=2228 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.171501 kernel: audit: type=1327 audit(1761871568.156:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:08.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:08.353000 audit[3537]: AVC avc: denied { write } for pid=3537 comm="tee" name="fd" dev="proc" ino=21529 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.353000 audit[3537]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc7c8b7e2 a2=241 a3=1b6 items=1 ppid=3514 pid=3537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.361899 kernel: audit: type=1400 audit(1761871568.353:299): avc: denied { write } for pid=3537 comm="tee" name="fd" dev="proc" ino=21529 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.362049 kernel: audit: type=1300 audit(1761871568.353:299): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc7c8b7e2 a2=241 a3=1b6 items=1 ppid=3514 pid=3537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.362075 kernel: audit: type=1307 audit(1761871568.353:299): 
cwd="/etc/service/enabled/node-status-reporter/log" Oct 31 00:46:08.353000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 31 00:46:08.353000 audit: PATH item=0 name="/dev/fd/63" inode=19176 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.366048 kernel: audit: type=1302 audit(1761871568.353:299): item=0 name="/dev/fd/63" inode=19176 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 00:46:08.357000 audit[3548]: AVC avc: denied { write } for pid=3548 comm="tee" name="fd" dev="proc" ino=20644 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.357000 audit[3548]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffd3187f3 a2=241 a3=1b6 items=1 ppid=3515 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.357000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 31 00:46:08.357000 audit: PATH item=0 name="/dev/fd/63" inode=19624 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 00:46:08.373965 env[1321]: time="2025-10-31T00:46:08.373915403Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Oct 31 00:46:08.378150 env[1321]: time="2025-10-31T00:46:08.378047138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:46:08.378588 kubelet[2117]: E1031 00:46:08.378535 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:46:08.378963 kubelet[2117]: E1031 00:46:08.378602 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:46:08.379135 kubelet[2117]: E1031 00:46:08.379080 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed710fbf9c8d49d2a72c1ed130a86450,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg2tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5687bd54d-fctt7_calico-system(69426558-399f-4dbc-9939-230d74bb54fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:08.383024 env[1321]: time="2025-10-31T00:46:08.382963067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:46:08.405000 
audit[3586]: AVC avc: denied { write } for pid=3586 comm="tee" name="fd" dev="proc" ino=20656 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.405000 audit[3586]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffee46d7f1 a2=241 a3=1b6 items=1 ppid=3530 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.405000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 31 00:46:08.405000 audit: PATH item=0 name="/dev/fd/63" inode=20653 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 00:46:08.407000 audit[3583]: AVC avc: denied { write } for pid=3583 comm="tee" name="fd" dev="proc" ino=21544 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.407000 audit[3583]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc56c07f1 a2=241 a3=1b6 items=1 ppid=3526 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.407000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 31 00:46:08.407000 audit: PATH item=0 name="/dev/fd/63" inode=19195 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.407000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 00:46:08.414000 audit[3572]: AVC avc: denied { write } for pid=3572 comm="tee" name="fd" dev="proc" ino=20662 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.414000 audit[3572]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff95277e1 a2=241 a3=1b6 items=1 ppid=3520 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.414000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 31 00:46:08.414000 audit: PATH item=0 name="/dev/fd/63" inode=20650 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.414000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 00:46:08.417000 audit[3593]: AVC avc: denied { write } for pid=3593 comm="tee" name="fd" dev="proc" ino=20666 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.417000 audit[3593]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff858f7f1 a2=241 a3=1b6 items=1 ppid=3525 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.417000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 31 00:46:08.417000 audit: PATH item=0 name="/dev/fd/63" inode=19198 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.417000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 00:46:08.462000 audit[3597]: AVC avc: denied { write } for pid=3597 comm="tee" name="fd" dev="proc" ino=21550 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 00:46:08.462000 audit[3597]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffeaa817f2 a2=241 a3=1b6 items=1 ppid=3516 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.462000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 31 00:46:08.462000 audit: PATH item=0 name="/dev/fd/63" inode=20668 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 00:46:08.462000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 00:46:08.542077 kubelet[2117]: I1031 00:46:08.542044 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:46:08.542502 kubelet[2117]: E1031 00:46:08.542219 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:08.542743 kubelet[2117]: E1031 00:46:08.542722 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:08.582000 audit[3630]: AVC 
avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.582000 audit: BPF prog-id=10 op=LOAD Oct 31 00:46:08.582000 audit[3630]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=3 a0=5 a1=fffffbec06e8 a2=98 a3=fffffbec06d8 items=0 ppid=3527 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.582000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 00:46:08.583000 audit: BPF prog-id=10 op=UNLOAD Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for 
pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit: BPF prog-id=11 op=LOAD Oct 31 00:46:08.583000 audit[3630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffbec0598 a2=74 a3=95 items=0 ppid=3527 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 00:46:08.583000 audit: BPF prog-id=11 op=UNLOAD Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: 
denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit: BPF prog-id=12 op=LOAD Oct 31 00:46:08.583000 audit[3630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffbec05c8 a2=40 a3=fffffbec05f8 items=0 ppid=3527 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 00:46:08.583000 audit: BPF prog-id=12 op=UNLOAD Oct 
31 00:46:08.583000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.583000 audit[3630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=fffffbec06e0 a2=50 a3=0 items=0 ppid=3527 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit: BPF prog-id=13 op=LOAD Oct 31 00:46:08.584000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe07152d8 a2=98 a3=ffffe07152c8 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.584000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.584000 audit: BPF prog-id=13 op=UNLOAD Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 
audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit: BPF prog-id=14 op=LOAD Oct 31 00:46:08.584000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0714f68 a2=74 a3=95 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.584000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.584000 audit: BPF prog-id=14 op=UNLOAD Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.584000 audit: BPF prog-id=15 op=LOAD Oct 31 00:46:08.584000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0714fc8 a2=94 a3=2 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.584000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.584000 audit: BPF prog-id=15 op=UNLOAD Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
00:46:08.676000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit: BPF prog-id=16 op=LOAD Oct 31 00:46:08.676000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0714f88 a2=40 a3=ffffe0714fb8 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.676000 audit: BPF prog-id=16 op=UNLOAD Oct 31 00:46:08.676000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.676000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffe07150a0 a2=50 a3=0 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe0714ff8 a2=28 a3=ffffe0715128 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe0715028 a2=28 a3=ffffe0715158 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe0714ed8 a2=28 a3=ffffe0715008 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe0715048 a2=28 a3=ffffe0715178 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe0715028 a2=28 a3=ffffe0715158 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe0715018 a2=28 a3=ffffe0715148 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe0715048 a2=28 a3=ffffe0715178 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe0715028 a2=28 a3=ffffe0715158 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe0715048 a2=28 a3=ffffe0715178 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe0715018 a2=28 a3=ffffe0715148 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe0715098 a2=28 a3=ffffe07151d8 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffe0714dd0 a2=50 a3=0 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit: BPF prog-id=17 op=LOAD Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe0714dd8 a2=94 a3=5 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit: BPF prog-id=17 op=UNLOAD Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffe0714ee0 a2=50 a3=0 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffe0715028 a2=4 a3=3 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: 
denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.685000 audit[3631]: AVC avc: denied { confidentiality } for pid=3631 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 00:46:08.685000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe0715008 a2=94 a3=6 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for 
pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { confidentiality } for pid=3631 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 00:46:08.686000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe07147d8 a2=94 a3=83 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.686000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { perfmon } for pid=3631 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { bpf } for pid=3631 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.686000 audit[3631]: AVC avc: denied { confidentiality } for pid=3631 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 00:46:08.686000 audit[3631]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe07147d8 a2=94 a3=83 items=0 ppid=3527 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.686000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 00:46:08.689064 env[1321]: time="2025-10-31T00:46:08.689001594Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:08.689961 env[1321]: time="2025-10-31T00:46:08.689912334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:46:08.690188 kubelet[2117]: E1031 00:46:08.690138 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:46:08.690249 kubelet[2117]: E1031 00:46:08.690202 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:46:08.690385 kubelet[2117]: E1031 00:46:08.690343 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wg2tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5687bd54d-fctt7_calico-system(69426558-399f-4dbc-9939-230d74bb54fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:08.691525 kubelet[2117]: E1031 00:46:08.691475 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5687bd54d-fctt7" podUID="69426558-399f-4dbc-9939-230d74bb54fd" Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } 
for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit: BPF prog-id=18 op=LOAD Oct 31 00:46:08.697000 audit[3634]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe5e0dd98 a2=98 a3=ffffe5e0dd88 items=0 ppid=3527 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 00:46:08.697000 audit: BPF prog-id=18 op=UNLOAD Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 
audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit: BPF prog-id=19 op=LOAD Oct 31 00:46:08.697000 audit[3634]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe5e0dc48 a2=74 a3=95 items=0 ppid=3527 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 00:46:08.697000 audit: BPF prog-id=19 op=UNLOAD Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: 
denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.697000 audit: BPF prog-id=20 op=LOAD Oct 31 00:46:08.697000 audit[3634]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe5e0dc78 a2=40 a3=ffffe5e0dca8 items=0 ppid=3527 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 00:46:08.697000 audit: BPF prog-id=20 op=UNLOAD Oct 31 00:46:08.753122 systemd-networkd[1103]: vxlan.calico: Link UP Oct 31 00:46:08.753128 systemd-networkd[1103]: vxlan.calico: Gained carrier Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.761000 audit: BPF prog-id=21 op=LOAD Oct 31 00:46:08.761000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd775308 a2=98 a3=ffffcd7752f8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit: BPF prog-id=21 op=UNLOAD Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit: BPF prog-id=22 op=LOAD Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 
a0=5 a1=ffffcd774fe8 a2=74 a3=95 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit: BPF prog-id=22 op=UNLOAD Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit: BPF prog-id=23 op=LOAD Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd775048 a2=94 a3=2 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit: BPF prog-id=23 op=UNLOAD Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd775078 a2=28 a3=ffffcd7751a8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd7750a8 a2=28 a3=ffffcd7751d8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd774f58 a2=28 a3=ffffcd775088 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd7750c8 a2=28 a3=ffffcd7751f8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd7750a8 a2=28 a3=ffffcd7751d8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd775098 a2=28 a3=ffffcd7751c8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd7750c8 a2=28 a3=ffffcd7751f8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd7750a8 a2=28 a3=ffffcd7751d8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for 
pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd7750c8 a2=28 a3=ffffcd7751f8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd775098 a2=28 a3=ffffcd7751c8 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd775118 a2=28 a3=ffffcd775258 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit: BPF prog-id=24 op=LOAD Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd774f38 a2=40 a3=ffffcd774f68 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.762000 audit: BPF prog-id=24 op=UNLOAD Oct 31 00:46:08.762000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.762000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffcd774f60 a2=50 a3=0 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffcd774f60 a2=50 a3=0 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.763000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { 
perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit: BPF prog-id=25 op=LOAD Oct 31 00:46:08.763000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd7746c8 a2=94 a3=2 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.763000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.763000 audit: BPF prog-id=25 op=UNLOAD Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { perfmon } for pid=3663 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit[3663]: AVC avc: denied { bpf } for pid=3663 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.763000 audit: BPF prog-id=26 op=LOAD Oct 31 00:46:08.763000 audit[3663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd774858 a2=94 a3=30 items=0 ppid=3527 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.763000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit: BPF prog-id=27 op=LOAD Oct 31 00:46:08.766000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9d80f48 a2=98 a3=ffffc9d80f38 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.766000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.766000 audit: BPF prog-id=27 op=UNLOAD Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit: BPF prog-id=28 op=LOAD Oct 31 00:46:08.766000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9d80bd8 a2=74 a3=95 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.766000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.766000 audit: BPF prog-id=28 op=UNLOAD Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { 
perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.766000 audit: BPF prog-id=29 op=LOAD Oct 31 00:46:08.766000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9d80c38 a2=94 a3=2 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.766000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.767000 audit: BPF prog-id=29 op=UNLOAD Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: AVC 
avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit: BPF prog-id=30 op=LOAD Oct 31 00:46:08.861000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9d80bf8 a2=40 a3=ffffc9d80c28 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.861000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.861000 audit: BPF prog-id=30 op=UNLOAD Oct 31 00:46:08.861000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.861000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc9d80d10 a2=50 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.861000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9d80c68 a2=28 a3=ffffc9d80d98 items=0 ppid=3527 pid=3665 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9d80c98 a2=28 a3=ffffc9d80dc8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9d80b48 a2=28 a3=ffffc9d80c78 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 
00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9d80cb8 a2=28 a3=ffffc9d80de8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9d80c98 a2=28 a3=ffffc9d80dc8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9d80c88 a2=28 a3=ffffc9d80db8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9d80cb8 a2=28 a3=ffffc9d80de8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9d80c98 a2=28 a3=ffffc9d80dc8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9d80cb8 a2=28 a3=ffffc9d80de8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9d80c88 a2=28 a3=ffffc9d80db8 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9d80d08 a2=28 a3=ffffc9d80e48 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc9d80a40 a2=50 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for 
pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit: BPF prog-id=31 op=LOAD Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9d80a48 a2=94 a3=5 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit: BPF prog-id=31 op=UNLOAD Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc9d80b50 
a2=50 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc9d80c98 a2=4 a3=3 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.870000 audit[3665]: AVC avc: denied { confidentiality } for pid=3665 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 00:46:08.870000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9d80c78 a2=94 a3=6 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 
00:46:08.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon 
} for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { confidentiality } for pid=3665 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 00:46:08.871000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9d80448 a2=94 a3=83 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { perfmon } for pid=3665 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { confidentiality } for pid=3665 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 00:46:08.871000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9d80448 a2=94 a3=83 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.871000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9d81e88 a2=10 a3=ffffc9d81f78 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9d81d48 a2=10 a3=ffffc9d81e38 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 
audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9d81cb8 a2=10 a3=ffffc9d81e38 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.871000 audit[3665]: AVC avc: denied { bpf } for pid=3665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 00:46:08.871000 audit[3665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9d81cb8 a2=10 a3=ffffc9d81e38 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 00:46:08.877000 audit: BPF prog-id=26 op=UNLOAD Oct 31 00:46:08.922000 audit[3691]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=3691 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:08.922000 audit[3691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffca4c4320 a2=0 a3=ffffa5e03fa8 items=0 ppid=3527 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.922000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:08.930000 audit[3693]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=3693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:08.930000 audit[3693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffcf1960e0 a2=0 a3=ffff9c251fa8 items=0 ppid=3527 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.930000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:08.947000 audit[3692]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=3692 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:08.947000 audit[3692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe12fd760 a2=0 a3=ffff93174fa8 items=0 ppid=3527 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.947000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:08.948000 audit[3695]: NETFILTER_CFG table=filter:108 family=2 entries=94 op=nft_register_chain pid=3695 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:08.948000 audit[3695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe74fd1a0 a2=0 a3=ffff89935fa8 items=0 ppid=3527 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:08.948000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:09.424688 kubelet[2117]: I1031 00:46:09.424463 2117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3387b160-8e0e-4dce-9d1c-94a068df8ae3" path="/var/lib/kubelet/pods/3387b160-8e0e-4dce-9d1c-94a068df8ae3/volumes" Oct 31 00:46:09.452646 systemd-networkd[1103]: calia07a9ae7c73: Gained IPv6LL Oct 31 00:46:09.545401 kubelet[2117]: E1031 00:46:09.545344 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5687bd54d-fctt7" podUID="69426558-399f-4dbc-9939-230d74bb54fd" Oct 31 00:46:09.567000 audit[3707]: NETFILTER_CFG table=filter:109 family=2 entries=20 op=nft_register_rule pid=3707 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:09.567000 audit[3707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 
a1=ffffeb2aa3b0 a2=0 a3=1 items=0 ppid=2228 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:09.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:09.576000 audit[3707]: NETFILTER_CFG table=nat:110 family=2 entries=14 op=nft_register_rule pid=3707 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:09.576000 audit[3707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffeb2aa3b0 a2=0 a3=1 items=0 ppid=2228 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:09.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:09.977135 kubelet[2117]: I1031 00:46:09.977084 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:46:09.977577 kubelet[2117]: E1031 00:46:09.977555 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:10.220602 systemd-networkd[1103]: vxlan.calico: Gained IPv6LL Oct 31 00:46:12.423190 env[1321]: time="2025-10-31T00:46:12.423147181Z" level=info msg="StopPodSandbox for \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\"" Oct 31 00:46:12.424173 env[1321]: time="2025-10-31T00:46:12.424139478Z" level=info msg="StopPodSandbox for \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\"" Oct 31 00:46:12.424732 env[1321]: time="2025-10-31T00:46:12.424318670Z" level=info msg="StopPodSandbox for 
\"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\"" Oct 31 00:46:12.424732 env[1321]: time="2025-10-31T00:46:12.424495182Z" level=info msg="StopPodSandbox for \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\"" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.508 [INFO][3791] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.508 [INFO][3791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" iface="eth0" netns="/var/run/netns/cni-ae6796a1-cea9-27f9-52e6-044f198c53be" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.508 [INFO][3791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" iface="eth0" netns="/var/run/netns/cni-ae6796a1-cea9-27f9-52e6-044f198c53be" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.509 [INFO][3791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" iface="eth0" netns="/var/run/netns/cni-ae6796a1-cea9-27f9-52e6-044f198c53be" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.509 [INFO][3791] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.509 [INFO][3791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.544 [INFO][3835] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.544 [INFO][3835] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.545 [INFO][3835] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.557 [WARNING][3835] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.557 [INFO][3835] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.559 [INFO][3835] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:12.567603 env[1321]: 2025-10-31 00:46:12.562 [INFO][3791] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:12.571360 env[1321]: time="2025-10-31T00:46:12.571017985Z" level=info msg="TearDown network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\" successfully" Oct 31 00:46:12.571360 env[1321]: time="2025-10-31T00:46:12.571130485Z" level=info msg="StopPodSandbox for \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\" returns successfully" Oct 31 00:46:12.570037 systemd[1]: run-netns-cni\x2dae6796a1\x2dcea9\x2d27f9\x2d52e6\x2d044f198c53be.mount: Deactivated successfully. 
Oct 31 00:46:12.572069 kubelet[2117]: E1031 00:46:12.572041 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:12.572800 env[1321]: time="2025-10-31T00:46:12.572767458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rrkhq,Uid:09069f0b-a951-47c9-a38b-43b3cfe8a3b6,Namespace:kube-system,Attempt:1,}" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.526 [INFO][3814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.526 [INFO][3814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" iface="eth0" netns="/var/run/netns/cni-31ecf967-ab42-1b2a-1219-a9ab294f2ecc" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.526 [INFO][3814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" iface="eth0" netns="/var/run/netns/cni-31ecf967-ab42-1b2a-1219-a9ab294f2ecc" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.527 [INFO][3814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" iface="eth0" netns="/var/run/netns/cni-31ecf967-ab42-1b2a-1219-a9ab294f2ecc" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.527 [INFO][3814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.527 [INFO][3814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.551 [INFO][3849] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.551 [INFO][3849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.559 [INFO][3849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.571 [WARNING][3849] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.571 [INFO][3849] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.576 [INFO][3849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:12.579845 env[1321]: 2025-10-31 00:46:12.577 [INFO][3814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:12.585259 env[1321]: time="2025-10-31T00:46:12.580771610Z" level=info msg="TearDown network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\" successfully" Oct 31 00:46:12.585259 env[1321]: time="2025-10-31T00:46:12.580867827Z" level=info msg="StopPodSandbox for \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\" returns successfully" Oct 31 00:46:12.585259 env[1321]: time="2025-10-31T00:46:12.583814114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tzh9k,Uid:542c9f03-90da-4571-a183-2191a31bfb63,Namespace:calico-system,Attempt:1,}" Oct 31 00:46:12.582217 systemd[1]: run-netns-cni\x2d31ecf967\x2dab42\x2d1b2a\x2d1219\x2da9ab294f2ecc.mount: Deactivated successfully. 
Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.505 [INFO][3808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.505 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" iface="eth0" netns="/var/run/netns/cni-495b897b-d37c-bb92-758a-644daa3724ec" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.506 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" iface="eth0" netns="/var/run/netns/cni-495b897b-d37c-bb92-758a-644daa3724ec" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.506 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" iface="eth0" netns="/var/run/netns/cni-495b897b-d37c-bb92-758a-644daa3724ec" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.506 [INFO][3808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.506 [INFO][3808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.557 [INFO][3831] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.558 [INFO][3831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.576 [INFO][3831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.592 [WARNING][3831] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.592 [INFO][3831] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.594 [INFO][3831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:12.614146 env[1321]: 2025-10-31 00:46:12.602 [INFO][3808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:12.614146 env[1321]: time="2025-10-31T00:46:12.605661261Z" level=info msg="TearDown network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\" successfully" Oct 31 00:46:12.614146 env[1321]: time="2025-10-31T00:46:12.605698988Z" level=info msg="StopPodSandbox for \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\" returns successfully" Oct 31 00:46:12.609722 systemd[1]: run-netns-cni\x2d495b897b\x2dd37c\x2dbb92\x2d758a\x2d644daa3724ec.mount: Deactivated successfully. 
Oct 31 00:46:12.615678 env[1321]: time="2025-10-31T00:46:12.615151158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-7bbdc,Uid:68befc49-9413-4be7-9089-5bb6c17bda13,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.507 [INFO][3800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.508 [INFO][3800] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" iface="eth0" netns="/var/run/netns/cni-a064c838-47cd-ee79-886f-495e14fc3148" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.508 [INFO][3800] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" iface="eth0" netns="/var/run/netns/cni-a064c838-47cd-ee79-886f-495e14fc3148" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.508 [INFO][3800] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" iface="eth0" netns="/var/run/netns/cni-a064c838-47cd-ee79-886f-495e14fc3148" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.508 [INFO][3800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.508 [INFO][3800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.565 [INFO][3834] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.565 [INFO][3834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.594 [INFO][3834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.605 [WARNING][3834] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.606 [INFO][3834] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.611 [INFO][3834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:12.617635 env[1321]: 2025-10-31 00:46:12.615 [INFO][3800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:12.618129 env[1321]: time="2025-10-31T00:46:12.617767546Z" level=info msg="TearDown network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\" successfully" Oct 31 00:46:12.618129 env[1321]: time="2025-10-31T00:46:12.617800832Z" level=info msg="StopPodSandbox for \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\" returns successfully" Oct 31 00:46:12.618996 env[1321]: time="2025-10-31T00:46:12.618954318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-fhccb,Uid:07e12617-5c5d-4e42-9bef-37ca707707aa,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:46:12.756640 systemd-networkd[1103]: calif6f07bd2f92: Link UP Oct 31 00:46:12.758549 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 00:46:12.758750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif6f07bd2f92: link becomes ready Oct 31 00:46:12.758818 systemd-networkd[1103]: calif6f07bd2f92: Gained carrier Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.655 [INFO][3865] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0 coredns-668d6bf9bc- kube-system 09069f0b-a951-47c9-a38b-43b3cfe8a3b6 998 0 2025-10-31 00:45:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rrkhq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif6f07bd2f92 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.655 [INFO][3865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.700 [INFO][3916] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" HandleID="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.701 [INFO][3916] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" HandleID="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5590), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-668d6bf9bc-rrkhq", "timestamp":"2025-10-31 00:46:12.700688855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.701 [INFO][3916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.701 [INFO][3916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.701 [INFO][3916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.713 [INFO][3916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.721 [INFO][3916] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.727 [INFO][3916] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.730 [INFO][3916] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.732 [INFO][3916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.732 [INFO][3916] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.736 [INFO][3916] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.742 [INFO][3916] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.748 [INFO][3916] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.748 [INFO][3916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" host="localhost" Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.748 [INFO][3916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:12.774407 env[1321]: 2025-10-31 00:46:12.748 [INFO][3916] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" HandleID="k8s-pod-network.3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.775083 env[1321]: 2025-10-31 00:46:12.753 [INFO][3865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09069f0b-a951-47c9-a38b-43b3cfe8a3b6", 
ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rrkhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6f07bd2f92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:12.775083 env[1321]: 2025-10-31 00:46:12.754 [INFO][3865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.775083 env[1321]: 2025-10-31 00:46:12.754 [INFO][3865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6f07bd2f92 
ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.775083 env[1321]: 2025-10-31 00:46:12.758 [INFO][3865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.775083 env[1321]: 2025-10-31 00:46:12.759 [INFO][3865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09069f0b-a951-47c9-a38b-43b3cfe8a3b6", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a", Pod:"coredns-668d6bf9bc-rrkhq", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6f07bd2f92", MAC:"f6:d3:79:fd:1f:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:12.775083 env[1321]: 2025-10-31 00:46:12.772 [INFO][3865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a" Namespace="kube-system" Pod="coredns-668d6bf9bc-rrkhq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:12.782000 audit[3966]: NETFILTER_CFG table=filter:111 family=2 entries=42 op=nft_register_chain pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:12.782000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc7a09430 a2=0 a3=ffff8745cfa8 items=0 ppid=3527 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:12.782000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:12.793739 env[1321]: time="2025-10-31T00:46:12.793530059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:12.793739 env[1321]: time="2025-10-31T00:46:12.793578147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:12.793739 env[1321]: time="2025-10-31T00:46:12.793596711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:12.793950 env[1321]: time="2025-10-31T00:46:12.793741617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a pid=3974 runtime=io.containerd.runc.v2 Oct 31 00:46:12.823230 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:12.848565 env[1321]: time="2025-10-31T00:46:12.848514812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rrkhq,Uid:09069f0b-a951-47c9-a38b-43b3cfe8a3b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a\"" Oct 31 00:46:12.849323 kubelet[2117]: E1031 00:46:12.849251 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:12.853296 env[1321]: time="2025-10-31T00:46:12.853240537Z" level=info msg="CreateContainer within sandbox \"3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:46:12.862444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali39625c0411a: link becomes ready Oct 31 00:46:12.862636 systemd-networkd[1103]: cali39625c0411a: Link UP Oct 31 00:46:12.862835 systemd-networkd[1103]: cali39625c0411a: Gained carrier Oct 31 00:46:12.878061 env[1321]: 2025-10-31 
00:46:12.675 [INFO][3863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--tzh9k-eth0 goldmane-666569f655- calico-system 542c9f03-90da-4571-a183-2191a31bfb63 999 0 2025-10-31 00:45:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-tzh9k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali39625c0411a [] [] }} ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.677 [INFO][3863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.720 [INFO][3926] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" HandleID="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.720 [INFO][3926] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" HandleID="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136440), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-tzh9k", "timestamp":"2025-10-31 00:46:12.720070682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.720 [INFO][3926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.748 [INFO][3926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.748 [INFO][3926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.815 [INFO][3926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.828 [INFO][3926] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.834 [INFO][3926] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.836 [INFO][3926] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.839 [INFO][3926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.839 [INFO][3926] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.841 [INFO][3926] ipam/ipam.go 
1780: Creating new handle: k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29 Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.845 [INFO][3926] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.854 [INFO][3926] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.854 [INFO][3926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" host="localhost" Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.854 [INFO][3926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:46:12.878061 env[1321]: 2025-10-31 00:46:12.854 [INFO][3926] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" HandleID="k8s-pod-network.a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.878703 env[1321]: 2025-10-31 00:46:12.858 [INFO][3863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tzh9k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"542c9f03-90da-4571-a183-2191a31bfb63", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-tzh9k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39625c0411a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:12.878703 env[1321]: 2025-10-31 00:46:12.858 [INFO][3863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.878703 env[1321]: 2025-10-31 00:46:12.858 [INFO][3863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39625c0411a ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.878703 env[1321]: 2025-10-31 00:46:12.862 [INFO][3863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.878703 env[1321]: 2025-10-31 00:46:12.863 [INFO][3863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tzh9k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"542c9f03-90da-4571-a183-2191a31bfb63", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 50, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29", Pod:"goldmane-666569f655-tzh9k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39625c0411a", MAC:"c6:3a:c0:c0:8e:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:12.878703 env[1321]: 2025-10-31 00:46:12.875 [INFO][3863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29" Namespace="calico-system" Pod="goldmane-666569f655-tzh9k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:12.884758 env[1321]: time="2025-10-31T00:46:12.884691882Z" level=info msg="CreateContainer within sandbox \"3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d74e10eec4603001212201d89b80c9f8021aa8e4612e6eebd9eb13dfe615eca\"" Oct 31 00:46:12.885811 env[1321]: time="2025-10-31T00:46:12.885555316Z" level=info msg="StartContainer for \"5d74e10eec4603001212201d89b80c9f8021aa8e4612e6eebd9eb13dfe615eca\"" Oct 31 00:46:12.886000 audit[4019]: NETFILTER_CFG table=filter:112 family=2 entries=48 op=nft_register_chain pid=4019 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:12.886000 audit[4019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26368 a0=3 a1=ffffcb8271f0 a2=0 a3=ffff9be3afa8 items=0 ppid=3527 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:12.886000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:12.896097 env[1321]: time="2025-10-31T00:46:12.896004465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:12.896231 env[1321]: time="2025-10-31T00:46:12.896100202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:12.896231 env[1321]: time="2025-10-31T00:46:12.896127287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:12.896390 env[1321]: time="2025-10-31T00:46:12.896345526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29 pid=4036 runtime=io.containerd.runc.v2 Oct 31 00:46:12.938404 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:12.964926 env[1321]: time="2025-10-31T00:46:12.964875382Z" level=info msg="StartContainer for \"5d74e10eec4603001212201d89b80c9f8021aa8e4612e6eebd9eb13dfe615eca\" returns successfully" Oct 31 00:46:12.973473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic50cda0c9b6: link becomes ready Oct 31 00:46:12.973809 systemd-networkd[1103]: calic50cda0c9b6: Link UP Oct 31 00:46:12.974621 systemd-networkd[1103]: calic50cda0c9b6: Gained carrier Oct 31 00:46:12.987314 env[1321]: time="2025-10-31T00:46:12.987088034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tzh9k,Uid:542c9f03-90da-4571-a183-2191a31bfb63,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29\"" Oct 31 00:46:12.991018 env[1321]: time="2025-10-31T00:46:12.990982971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.699 [INFO][3900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--94987b775--fhccb-eth0 calico-apiserver-94987b775- calico-apiserver 07e12617-5c5d-4e42-9bef-37ca707707aa 996 0 2025-10-31 00:45:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94987b775 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] 
map[] [] [] []} {k8s localhost calico-apiserver-94987b775-fhccb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic50cda0c9b6 [] [] }} ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.699 [INFO][3900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.762 [INFO][3938] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" HandleID="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.762 [INFO][3938] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" HandleID="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-94987b775-fhccb", "timestamp":"2025-10-31 00:46:12.762059671 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:12.993864 env[1321]: 2025-10-31 
00:46:12.762 [INFO][3938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.854 [INFO][3938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.854 [INFO][3938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.915 [INFO][3938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.929 [INFO][3938] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.934 [INFO][3938] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.941 [INFO][3938] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.944 [INFO][3938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.944 [INFO][3938] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.946 [INFO][3938] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.953 [INFO][3938] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.963 [INFO][3938] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.963 [INFO][3938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" host="localhost" Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.963 [INFO][3938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:12.993864 env[1321]: 2025-10-31 00:46:12.963 [INFO][3938] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" HandleID="k8s-pod-network.79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.994492 env[1321]: 2025-10-31 00:46:12.968 [INFO][3900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--fhccb-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"07e12617-5c5d-4e42-9bef-37ca707707aa", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-94987b775-fhccb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50cda0c9b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:12.994492 env[1321]: 2025-10-31 00:46:12.968 [INFO][3900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.994492 env[1321]: 2025-10-31 00:46:12.968 [INFO][3900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic50cda0c9b6 ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.994492 env[1321]: 2025-10-31 00:46:12.972 [INFO][3900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:12.994492 env[1321]: 2025-10-31 00:46:12.977 [INFO][3900] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--fhccb-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"07e12617-5c5d-4e42-9bef-37ca707707aa", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd", Pod:"calico-apiserver-94987b775-fhccb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50cda0c9b6", MAC:"1a:d6:39:54:ce:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:12.994492 env[1321]: 2025-10-31 00:46:12.988 [INFO][3900] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-fhccb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:13.004000 audit[4108]: NETFILTER_CFG table=filter:113 family=2 entries=58 op=nft_register_chain pid=4108 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:13.004000 audit[4108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=fffffb13bf00 a2=0 a3=ffffb405ffa8 items=0 ppid=3527 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.004000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:13.008017 env[1321]: time="2025-10-31T00:46:13.007887325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:13.008157 env[1321]: time="2025-10-31T00:46:13.007936254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:13.008157 env[1321]: time="2025-10-31T00:46:13.007969499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:13.008574 env[1321]: time="2025-10-31T00:46:13.008321761Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd pid=4113 runtime=io.containerd.runc.v2 Oct 31 00:46:13.056951 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:13.090606 env[1321]: time="2025-10-31T00:46:13.090405002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-fhccb,Uid:07e12617-5c5d-4e42-9bef-37ca707707aa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd\"" Oct 31 00:46:13.094641 systemd-networkd[1103]: cali5dff92c2027: Link UP Oct 31 00:46:13.097763 systemd-networkd[1103]: cali5dff92c2027: Gained carrier Oct 31 00:46:13.098438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5dff92c2027: link becomes ready Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:12.700 [INFO][3887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0 calico-apiserver-94987b775- calico-apiserver 68befc49-9413-4be7-9089-5bb6c17bda13 997 0 2025-10-31 00:45:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94987b775 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-94987b775-7bbdc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5dff92c2027 [] [] }} ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:12.700 [INFO][3887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:12.762 [INFO][3940] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" HandleID="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:12.762 [INFO][3940] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" HandleID="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-94987b775-7bbdc", "timestamp":"2025-10-31 00:46:12.762633213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:12.762 [INFO][3940] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:12.963 [INFO][3940] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:12.964 [INFO][3940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.017 [INFO][3940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.030 [INFO][3940] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.040 [INFO][3940] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.043 [INFO][3940] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.047 [INFO][3940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.047 [INFO][3940] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.054 [INFO][3940] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5 Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.066 [INFO][3940] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.076 [INFO][3940] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" host="localhost" Oct 31 
00:46:13.118730 env[1321]: 2025-10-31 00:46:13.076 [INFO][3940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" host="localhost" Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.076 [INFO][3940] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:13.118730 env[1321]: 2025-10-31 00:46:13.077 [INFO][3940] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" HandleID="k8s-pod-network.f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:13.121624 env[1321]: 2025-10-31 00:46:13.084 [INFO][3887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"68befc49-9413-4be7-9089-5bb6c17bda13", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-94987b775-7bbdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dff92c2027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:13.121624 env[1321]: 2025-10-31 00:46:13.084 [INFO][3887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:13.121624 env[1321]: 2025-10-31 00:46:13.084 [INFO][3887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dff92c2027 ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:13.121624 env[1321]: 2025-10-31 00:46:13.098 [INFO][3887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:13.121624 env[1321]: 2025-10-31 00:46:13.099 [INFO][3887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"68befc49-9413-4be7-9089-5bb6c17bda13", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5", Pod:"calico-apiserver-94987b775-7bbdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dff92c2027", MAC:"1a:d4:f1:9d:84:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:13.121624 env[1321]: 2025-10-31 00:46:13.113 [INFO][3887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5" Namespace="calico-apiserver" Pod="calico-apiserver-94987b775-7bbdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 
31 00:46:13.136000 audit[4159]: NETFILTER_CFG table=filter:114 family=2 entries=49 op=nft_register_chain pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:13.136000 audit[4159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=fffffd5e2420 a2=0 a3=ffffaa7b3fa8 items=0 ppid=3527 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.136000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:13.142667 env[1321]: time="2025-10-31T00:46:13.142563287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:13.142667 env[1321]: time="2025-10-31T00:46:13.142615377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:13.142667 env[1321]: time="2025-10-31T00:46:13.142627059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:13.143109 env[1321]: time="2025-10-31T00:46:13.143063215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5 pid=4167 runtime=io.containerd.runc.v2 Oct 31 00:46:13.178526 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:13.198877 env[1321]: time="2025-10-31T00:46:13.198835813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94987b775-7bbdc,Uid:68befc49-9413-4be7-9089-5bb6c17bda13,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5\"" Oct 31 00:46:13.331000 env[1321]: time="2025-10-31T00:46:13.329865057Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:13.332145 env[1321]: time="2025-10-31T00:46:13.332085166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:46:13.332668 kubelet[2117]: E1031 00:46:13.332460 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:46:13.332668 kubelet[2117]: E1031 00:46:13.332509 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:46:13.332844 kubelet[2117]: E1031 00:46:13.332763 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfz7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tzh9k_calico-system(542c9f03-90da-4571-a183-2191a31bfb63): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:13.334372 env[1321]: time="2025-10-31T00:46:13.333031171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:46:13.334581 kubelet[2117]: E1031 00:46:13.334550 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:13.424636 env[1321]: time="2025-10-31T00:46:13.424597151Z" level=info msg="StopPodSandbox for \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\"" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.472 [INFO][4212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.472 [INFO][4212] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" iface="eth0" netns="/var/run/netns/cni-2b636d2f-165d-17a3-b37b-e584cc00567f" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.473 [INFO][4212] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" iface="eth0" netns="/var/run/netns/cni-2b636d2f-165d-17a3-b37b-e584cc00567f" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.473 [INFO][4212] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" iface="eth0" netns="/var/run/netns/cni-2b636d2f-165d-17a3-b37b-e584cc00567f" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.473 [INFO][4212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.473 [INFO][4212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.493 [INFO][4221] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.493 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.493 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.501 [WARNING][4221] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.501 [INFO][4221] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.503 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:13.507339 env[1321]: 2025-10-31 00:46:13.505 [INFO][4212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:13.508022 env[1321]: time="2025-10-31T00:46:13.507984980Z" level=info msg="TearDown network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\" successfully" Oct 31 00:46:13.508104 env[1321]: time="2025-10-31T00:46:13.508086078Z" level=info msg="StopPodSandbox for \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\" returns successfully" Oct 31 00:46:13.508892 env[1321]: time="2025-10-31T00:46:13.508859773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c5c54685-nbdhs,Uid:40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c,Namespace:calico-system,Attempt:1,}" Oct 31 00:46:13.544639 env[1321]: time="2025-10-31T00:46:13.544588865Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:13.545520 env[1321]: time="2025-10-31T00:46:13.545471499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:46:13.545986 kubelet[2117]: E1031 00:46:13.545734 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:13.545986 kubelet[2117]: E1031 00:46:13.545800 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:13.546121 kubelet[2117]: E1031 00:46:13.546054 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvxnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-94987b775-fhccb_calico-apiserver(07e12617-5c5d-4e42-9bef-37ca707707aa): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:13.546743 env[1321]: time="2025-10-31T00:46:13.546700954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:46:13.548997 kubelet[2117]: E1031 00:46:13.548957 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:46:13.557827 kubelet[2117]: E1031 00:46:13.557785 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:46:13.559088 kubelet[2117]: E1031 00:46:13.559060 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:13.563638 kubelet[2117]: E1031 00:46:13.563602 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:13.576436 systemd[1]: run-netns-cni\x2da064c838\x2d47cd\x2dee79\x2d886f\x2d495e14fc3148.mount: Deactivated successfully. Oct 31 00:46:13.576577 systemd[1]: run-netns-cni\x2d2b636d2f\x2d165d\x2d17a3\x2db37b\x2de584cc00567f.mount: Deactivated successfully. Oct 31 00:46:13.590915 kubelet[2117]: I1031 00:46:13.589728 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rrkhq" podStartSLOduration=37.589700397 podStartE2EDuration="37.589700397s" podCreationTimestamp="2025-10-31 00:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:46:13.588084474 +0000 UTC m=+44.281172271" watchObservedRunningTime="2025-10-31 00:46:13.589700397 +0000 UTC m=+44.282788114" Oct 31 00:46:13.598000 audit[4250]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.600545 kernel: kauditd_printk_skb: 571 callbacks suppressed Oct 31 00:46:13.600659 kernel: audit: type=1325 audit(1761871573.598:414): table=filter:115 family=2 entries=20 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.598000 audit[4250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd0bf8120 a2=0 a3=1 items=0 ppid=2228 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.607445 kernel: audit: type=1300 audit(1761871573.598:414): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd0bf8120 a2=0 a3=1 items=0 ppid=2228 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.607560 kernel: audit: type=1327 audit(1761871573.598:414): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:13.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:13.611000 audit[4250]: NETFILTER_CFG table=nat:116 family=2 entries=14 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.611000 audit[4250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd0bf8120 a2=0 a3=1 items=0 ppid=2228 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.619700 kernel: audit: type=1325 audit(1761871573.611:415): table=nat:116 family=2 entries=14 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.619809 kernel: audit: type=1300 audit(1761871573.611:415): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd0bf8120 a2=0 a3=1 items=0 ppid=2228 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.619845 kernel: audit: type=1327 audit(1761871573.611:415): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:13.611000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:13.629000 audit[4253]: NETFILTER_CFG table=filter:117 family=2 entries=17 op=nft_register_rule pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.629000 audit[4253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd672e470 a2=0 a3=1 items=0 ppid=2228 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.636825 kernel: audit: type=1325 audit(1761871573.629:416): table=filter:117 family=2 entries=17 op=nft_register_rule pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.636964 kernel: audit: type=1300 audit(1761871573.629:416): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd672e470 a2=0 a3=1 items=0 ppid=2228 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:13.639404 kernel: audit: type=1327 audit(1761871573.629:416): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:13.645000 audit[4253]: NETFILTER_CFG table=nat:118 family=2 entries=35 op=nft_register_chain pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.645000 audit[4253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 
a1=ffffd672e470 a2=0 a3=1 items=0 ppid=2228 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:13.649451 kernel: audit: type=1325 audit(1761871573.645:417): table=nat:118 family=2 entries=35 op=nft_register_chain pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:13.679250 systemd-networkd[1103]: calib5adf2c148a: Link UP Oct 31 00:46:13.679857 systemd-networkd[1103]: calib5adf2c148a: Gained carrier Oct 31 00:46:13.680466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib5adf2c148a: link becomes ready Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.558 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0 calico-kube-controllers-67c5c54685- calico-system 40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c 1025 0 2025-10-31 00:45:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67c5c54685 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67c5c54685-nbdhs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib5adf2c148a [] [] }} ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.558 [INFO][4229] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.625 [INFO][4243] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" HandleID="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.626 [INFO][4243] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" HandleID="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b1850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67c5c54685-nbdhs", "timestamp":"2025-10-31 00:46:13.625795752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.626 [INFO][4243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.626 [INFO][4243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.626 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.642 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.649 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.654 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.657 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.660 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.660 [INFO][4243] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.662 [INFO][4243] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.666 [INFO][4243] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.675 [INFO][4243] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" host="localhost" Oct 31 
00:46:13.694318 env[1321]: 2025-10-31 00:46:13.675 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" host="localhost" Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.675 [INFO][4243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:13.694318 env[1321]: 2025-10-31 00:46:13.675 [INFO][4243] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" HandleID="k8s-pod-network.22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.695057 env[1321]: 2025-10-31 00:46:13.677 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0", GenerateName:"calico-kube-controllers-67c5c54685-", Namespace:"calico-system", SelfLink:"", UID:"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c5c54685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67c5c54685-nbdhs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5adf2c148a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:13.695057 env[1321]: 2025-10-31 00:46:13.677 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.695057 env[1321]: 2025-10-31 00:46:13.677 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5adf2c148a ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.695057 env[1321]: 2025-10-31 00:46:13.679 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.695057 env[1321]: 2025-10-31 00:46:13.680 [INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0", GenerateName:"calico-kube-controllers-67c5c54685-", Namespace:"calico-system", SelfLink:"", UID:"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c5c54685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c", Pod:"calico-kube-controllers-67c5c54685-nbdhs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5adf2c148a", MAC:"d6:38:03:5a:da:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:13.695057 env[1321]: 2025-10-31 00:46:13.691 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c" Namespace="calico-system" Pod="calico-kube-controllers-67c5c54685-nbdhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:13.704000 audit[4264]: NETFILTER_CFG table=filter:119 family=2 entries=52 op=nft_register_chain pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:13.704000 audit[4264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24328 a0=3 a1=ffffcc28dd70 a2=0 a3=ffff91a9efa8 items=0 ppid=3527 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:13.704000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:13.710281 env[1321]: time="2025-10-31T00:46:13.710207241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:13.710454 env[1321]: time="2025-10-31T00:46:13.710289535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:13.710454 env[1321]: time="2025-10-31T00:46:13.710317420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:13.710625 env[1321]: time="2025-10-31T00:46:13.710585027Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c pid=4273 runtime=io.containerd.runc.v2 Oct 31 00:46:13.730809 systemd[1]: run-containerd-runc-k8s.io-22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c-runc.u3yeln.mount: Deactivated successfully. Oct 31 00:46:13.747620 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:13.755222 env[1321]: time="2025-10-31T00:46:13.755159465Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:13.756776 env[1321]: time="2025-10-31T00:46:13.756710017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:46:13.757002 kubelet[2117]: E1031 00:46:13.756959 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:13.757080 kubelet[2117]: E1031 00:46:13.757015 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:13.757169 kubelet[2117]: E1031 00:46:13.757132 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zphqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-94987b775-7bbdc_calico-apiserver(68befc49-9413-4be7-9089-5bb6c17bda13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:13.761013 kubelet[2117]: E1031 00:46:13.760942 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13" Oct 31 00:46:13.772083 env[1321]: time="2025-10-31T00:46:13.772031257Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-67c5c54685-nbdhs,Uid:40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c,Namespace:calico-system,Attempt:1,} returns sandbox id \"22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c\"" Oct 31 00:46:13.773771 env[1321]: time="2025-10-31T00:46:13.773573407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:46:13.932629 systemd-networkd[1103]: cali39625c0411a: Gained IPv6LL Oct 31 00:46:13.990683 env[1321]: time="2025-10-31T00:46:13.990627142Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:13.991613 env[1321]: time="2025-10-31T00:46:13.991565706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:46:13.991895 kubelet[2117]: E1031 00:46:13.991848 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:46:13.991964 kubelet[2117]: E1031 00:46:13.991912 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:46:13.992390 kubelet[2117]: E1031 00:46:13.992305 2117 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5v2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67c5c54685-nbdhs_calico-system(40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:13.993574 kubelet[2117]: E1031 00:46:13.993526 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:13.997652 systemd-networkd[1103]: calif6f07bd2f92: Gained IPv6LL Oct 31 00:46:14.566784 kubelet[2117]: E1031 00:46:14.566738 2117 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13" Oct 31 00:46:14.566784 kubelet[2117]: E1031 00:46:14.566762 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:14.567001 kubelet[2117]: E1031 00:46:14.566823 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:14.567332 kubelet[2117]: E1031 00:46:14.567308 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" 
podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:46:14.567436 kubelet[2117]: E1031 00:46:14.567320 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:14.596000 audit[4313]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:14.596000 audit[4313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff8192200 a2=0 a3=1 items=0 ppid=2228 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:14.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:14.604000 audit[4313]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:14.604000 audit[4313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff8192200 a2=0 a3=1 items=0 ppid=2228 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:14.604000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:14.956641 systemd-networkd[1103]: calic50cda0c9b6: Gained IPv6LL Oct 31 00:46:15.084597 systemd-networkd[1103]: cali5dff92c2027: Gained IPv6LL Oct 31 00:46:15.340697 systemd-networkd[1103]: calib5adf2c148a: Gained IPv6LL Oct 31 00:46:15.569260 kubelet[2117]: E1031 00:46:15.568746 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:15.569775 kubelet[2117]: E1031 00:46:15.569735 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:16.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.54:22-10.0.0.1:57090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:16.341063 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:57090.service. 
Oct 31 00:46:16.399000 audit[4317]: USER_ACCT pid=4317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:16.400918 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 57090 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:16.401000 audit[4317]: CRED_ACQ pid=4317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:16.401000 audit[4317]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd89778b0 a2=3 a3=1 items=0 ppid=1 pid=4317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:16.401000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:16.402917 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:16.410044 systemd-logind[1305]: New session 8 of user core. Oct 31 00:46:16.412613 systemd[1]: Started session-8.scope. 
Oct 31 00:46:16.418000 audit[4317]: USER_START pid=4317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:16.420000 audit[4320]: CRED_ACQ pid=4320 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:16.423279 env[1321]: time="2025-10-31T00:46:16.422767103Z" level=info msg="StopPodSandbox for \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\"" Oct 31 00:46:16.425279 env[1321]: time="2025-10-31T00:46:16.422790747Z" level=info msg="StopPodSandbox for \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\"" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.486 [INFO][4343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.487 [INFO][4343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" iface="eth0" netns="/var/run/netns/cni-32759b6b-132e-2b34-c2b8-79a474e97026" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.487 [INFO][4343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" iface="eth0" netns="/var/run/netns/cni-32759b6b-132e-2b34-c2b8-79a474e97026" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.487 [INFO][4343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" iface="eth0" netns="/var/run/netns/cni-32759b6b-132e-2b34-c2b8-79a474e97026" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.487 [INFO][4343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.487 [INFO][4343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.526 [INFO][4366] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.527 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.527 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.543 [WARNING][4366] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.543 [INFO][4366] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.546 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:16.555369 env[1321]: 2025-10-31 00:46:16.548 [INFO][4343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:16.558889 systemd[1]: run-netns-cni\x2d32759b6b\x2d132e\x2d2b34\x2dc2b8\x2d79a474e97026.mount: Deactivated successfully. 
Oct 31 00:46:16.561363 env[1321]: time="2025-10-31T00:46:16.561304602Z" level=info msg="TearDown network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\" successfully" Oct 31 00:46:16.561584 env[1321]: time="2025-10-31T00:46:16.561561724Z" level=info msg="StopPodSandbox for \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\" returns successfully" Oct 31 00:46:16.562027 kubelet[2117]: E1031 00:46:16.562001 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:16.565619 env[1321]: time="2025-10-31T00:46:16.563626104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsg2w,Uid:ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715,Namespace:kube-system,Attempt:1,}" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.521 [INFO][4344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.521 [INFO][4344] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" iface="eth0" netns="/var/run/netns/cni-204d6944-fa7a-6a15-4926-2d1448d1768e" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.521 [INFO][4344] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" iface="eth0" netns="/var/run/netns/cni-204d6944-fa7a-6a15-4926-2d1448d1768e" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.521 [INFO][4344] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" iface="eth0" netns="/var/run/netns/cni-204d6944-fa7a-6a15-4926-2d1448d1768e" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.521 [INFO][4344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.521 [INFO][4344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.550 [INFO][4373] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.552 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.552 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.565 [WARNING][4373] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.565 [INFO][4373] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.567 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:16.575234 env[1321]: 2025-10-31 00:46:16.573 [INFO][4344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:16.579498 env[1321]: time="2025-10-31T00:46:16.578895099Z" level=info msg="TearDown network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\" successfully" Oct 31 00:46:16.579498 env[1321]: time="2025-10-31T00:46:16.578932825Z" level=info msg="StopPodSandbox for \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\" returns successfully" Oct 31 00:46:16.577768 systemd[1]: run-netns-cni\x2d204d6944\x2dfa7a\x2d6a15\x2d4926\x2d2d1448d1768e.mount: Deactivated successfully. 
Oct 31 00:46:16.579693 env[1321]: time="2025-10-31T00:46:16.579641902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25c9f,Uid:c0bdf479-9385-4085-afb4-2cdc588aefd9,Namespace:calico-system,Attempt:1,}" Oct 31 00:46:16.744706 sshd[4317]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:16.744000 audit[4317]: USER_END pid=4317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:16.744000 audit[4317]: CRED_DISP pid=4317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:16.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.54:22-10.0.0.1:57090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:16.747706 systemd-logind[1305]: Session 8 logged out. Waiting for processes to exit. Oct 31 00:46:16.747860 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:57090.service: Deactivated successfully. Oct 31 00:46:16.748765 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 00:46:16.749192 systemd-logind[1305]: Removed session 8. 
Oct 31 00:46:16.815012 systemd-networkd[1103]: cali459fb2ee066: Link UP Oct 31 00:46:16.816348 systemd-networkd[1103]: cali459fb2ee066: Gained carrier Oct 31 00:46:16.816456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali459fb2ee066: link becomes ready Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.656 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--25c9f-eth0 csi-node-driver- calico-system c0bdf479-9385-4085-afb4-2cdc588aefd9 1110 0 2025-10-31 00:45:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-25c9f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali459fb2ee066 [] [] }} ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.656 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.697 [INFO][4410] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" HandleID="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.697 [INFO][4410] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" HandleID="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-25c9f", "timestamp":"2025-10-31 00:46:16.697118532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.697 [INFO][4410] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.697 [INFO][4410] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.697 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.709 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.714 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.722 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.725 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.733 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:16.839463 env[1321]: 
2025-10-31 00:46:16.733 [INFO][4410] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.736 [INFO][4410] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60 Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.795 [INFO][4410] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.807 [INFO][4410] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.807 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" host="localhost" Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.807 [INFO][4410] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:46:16.839463 env[1321]: 2025-10-31 00:46:16.807 [INFO][4410] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" HandleID="k8s-pod-network.de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.840151 env[1321]: 2025-10-31 00:46:16.812 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--25c9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0bdf479-9385-4085-afb4-2cdc588aefd9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-25c9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali459fb2ee066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:16.840151 env[1321]: 2025-10-31 00:46:16.812 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.840151 env[1321]: 2025-10-31 00:46:16.812 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali459fb2ee066 ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.840151 env[1321]: 2025-10-31 00:46:16.816 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.840151 env[1321]: 2025-10-31 00:46:16.816 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--25c9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0bdf479-9385-4085-afb4-2cdc588aefd9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 52, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60", Pod:"csi-node-driver-25c9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali459fb2ee066", MAC:"02:f2:46:a9:e6:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:16.840151 env[1321]: 2025-10-31 00:46:16.833 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60" Namespace="calico-system" Pod="csi-node-driver-25c9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:16.843000 audit[4441]: NETFILTER_CFG table=filter:122 family=2 entries=56 op=nft_register_chain pid=4441 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:16.843000 audit[4441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25516 a0=3 a1=ffffd8fc4bd0 a2=0 a3=ffffb6de7fa8 items=0 ppid=3527 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 00:46:16.843000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:16.861993 env[1321]: time="2025-10-31T00:46:16.861911435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:16.861993 env[1321]: time="2025-10-31T00:46:16.861951721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:16.861993 env[1321]: time="2025-10-31T00:46:16.861962523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:16.862372 env[1321]: time="2025-10-31T00:46:16.862333304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60 pid=4450 runtime=io.containerd.runc.v2 Oct 31 00:46:16.882987 systemd-networkd[1103]: cali371646424cb: Link UP Oct 31 00:46:16.884584 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali371646424cb: link becomes ready Oct 31 00:46:16.884083 systemd-networkd[1103]: cali371646424cb: Gained carrier Oct 31 00:46:16.900308 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.661 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0 coredns-668d6bf9bc- kube-system ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715 1109 0 2025-10-31 00:45:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] 
[] []} {k8s localhost coredns-668d6bf9bc-wsg2w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali371646424cb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.662 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.703 [INFO][4415] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" HandleID="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.704 [INFO][4415] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" HandleID="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a0760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wsg2w", "timestamp":"2025-10-31 00:46:16.703885566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.704 [INFO][4415] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.807 [INFO][4415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.807 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.832 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.844 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.849 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.850 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.853 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.853 [INFO][4415] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.855 [INFO][4415] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.864 [INFO][4415] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.876 [INFO][4415] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] 
block=192.168.88.128/26 handle="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.877 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" host="localhost" Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.877 [INFO][4415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:16.900861 env[1321]: 2025-10-31 00:46:16.877 [INFO][4415] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" HandleID="k8s-pod-network.d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.901402 env[1321]: 2025-10-31 00:46:16.881 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wsg2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali371646424cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:16.901402 env[1321]: 2025-10-31 00:46:16.881 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.901402 env[1321]: 2025-10-31 00:46:16.881 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali371646424cb ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.901402 env[1321]: 2025-10-31 00:46:16.884 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.901402 env[1321]: 2025-10-31 00:46:16.884 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a", Pod:"coredns-668d6bf9bc-wsg2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali371646424cb", MAC:"82:63:25:95:89:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:16.901402 env[1321]: 2025-10-31 00:46:16.895 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsg2w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:16.911000 audit[4496]: NETFILTER_CFG table=filter:123 family=2 entries=62 op=nft_register_chain pid=4496 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 00:46:16.911000 audit[4496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27948 a0=3 a1=ffffd72f30d0 a2=0 a3=ffff9ac7ffa8 items=0 ppid=3527 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:16.911000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 00:46:16.913832 env[1321]: time="2025-10-31T00:46:16.913713167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-25c9f,Uid:c0bdf479-9385-4085-afb4-2cdc588aefd9,Namespace:calico-system,Attempt:1,} returns sandbox id \"de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60\"" Oct 31 00:46:16.915639 env[1321]: time="2025-10-31T00:46:16.915605719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:46:16.916986 env[1321]: time="2025-10-31T00:46:16.916899732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:46:16.917067 env[1321]: time="2025-10-31T00:46:16.916993707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:46:16.917067 env[1321]: time="2025-10-31T00:46:16.917041635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:46:16.917661 env[1321]: time="2025-10-31T00:46:16.917605768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a pid=4501 runtime=io.containerd.runc.v2 Oct 31 00:46:16.946978 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:46:16.965939 env[1321]: time="2025-10-31T00:46:16.965894682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsg2w,Uid:ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715,Namespace:kube-system,Attempt:1,} returns sandbox id \"d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a\"" Oct 31 00:46:16.966636 kubelet[2117]: E1031 00:46:16.966612 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:16.969399 env[1321]: time="2025-10-31T00:46:16.969355812Z" level=info msg="CreateContainer within sandbox \"d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:46:17.044629 env[1321]: time="2025-10-31T00:46:17.044494139Z" level=info msg="CreateContainer within sandbox \"d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"fd39dc89d267f6ecb924905e9a69d785d665ba85653407dacb570af7aab63b73\"" Oct 31 00:46:17.045635 env[1321]: time="2025-10-31T00:46:17.045525586Z" level=info msg="StartContainer for \"fd39dc89d267f6ecb924905e9a69d785d665ba85653407dacb570af7aab63b73\"" Oct 31 00:46:17.112819 env[1321]: time="2025-10-31T00:46:17.112742135Z" level=info msg="StartContainer for \"fd39dc89d267f6ecb924905e9a69d785d665ba85653407dacb570af7aab63b73\" returns successfully" Oct 31 00:46:17.121588 env[1321]: time="2025-10-31T00:46:17.121538478Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:17.133032 env[1321]: time="2025-10-31T00:46:17.132952004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:46:17.134444 kubelet[2117]: E1031 00:46:17.134387 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:46:17.134529 kubelet[2117]: E1031 00:46:17.134454 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:46:17.134696 kubelet[2117]: E1031 00:46:17.134589 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:17.137219 env[1321]: time="2025-10-31T00:46:17.137105835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:46:17.371897 env[1321]: time="2025-10-31T00:46:17.371763222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:17.373473 env[1321]: time="2025-10-31T00:46:17.373375443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:46:17.373773 kubelet[2117]: E1031 00:46:17.373710 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:46:17.373773 kubelet[2117]: E1031 00:46:17.373767 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:46:17.373950 kubelet[2117]: E1031 00:46:17.373910 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:17.375245 kubelet[2117]: E1031 00:46:17.375177 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:17.577010 kubelet[2117]: E1031 00:46:17.576953 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:17.581047 kubelet[2117]: E1031 00:46:17.581003 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:17.601434 kubelet[2117]: I1031 00:46:17.599218 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wsg2w" podStartSLOduration=41.599019172 podStartE2EDuration="41.599019172s" podCreationTimestamp="2025-10-31 00:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:46:17.597255527 +0000 UTC m=+48.290343284" watchObservedRunningTime="2025-10-31 00:46:17.599019172 +0000 UTC m=+48.292106929" Oct 31 00:46:17.620000 audit[4575]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=4575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:17.620000 audit[4575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffff9e6d60 a2=0 a3=1 items=0 ppid=2228 pid=4575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:17.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:17.628000 audit[4575]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=4575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:17.628000 audit[4575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffff9e6d60 a2=0 a3=1 items=0 ppid=2228 pid=4575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:17.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:17.662000 audit[4577]: NETFILTER_CFG table=filter:126 family=2 entries=14 op=nft_register_rule pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:17.662000 audit[4577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc3fd7d60 a2=0 a3=1 items=0 ppid=2228 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:17.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:17.676000 audit[4577]: NETFILTER_CFG table=nat:127 family=2 entries=56 op=nft_register_chain pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:17.676000 audit[4577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffc3fd7d60 a2=0 a3=1 items=0 ppid=2228 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:17.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:18.157580 systemd-networkd[1103]: cali371646424cb: Gained IPv6LL Oct 31 00:46:18.581967 kubelet[2117]: E1031 00:46:18.581936 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:18.582968 kubelet[2117]: E1031 00:46:18.582928 2117 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:18.732706 systemd-networkd[1103]: cali459fb2ee066: Gained IPv6LL Oct 31 00:46:19.583145 kubelet[2117]: E1031 00:46:19.583114 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:21.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.54:22-10.0.0.1:35472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:21.748811 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:35472.service. Oct 31 00:46:21.755803 kernel: kauditd_printk_skb: 40 callbacks suppressed Oct 31 00:46:21.755900 kernel: audit: type=1130 audit(1761871581.747:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.54:22-10.0.0.1:35472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 00:46:21.798000 audit[4585]: USER_ACCT pid=4585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.800654 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:21.801894 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:21.800000 audit[4585]: CRED_ACQ pid=4585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.807207 kernel: audit: type=1101 audit(1761871581.798:437): pid=4585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.807285 kernel: audit: type=1103 audit(1761871581.800:438): pid=4585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.807311 kernel: audit: type=1006 audit(1761871581.800:439): pid=4585 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Oct 31 00:46:21.807880 systemd-logind[1305]: New session 9 of user core. Oct 31 00:46:21.808682 systemd[1]: Started session-9.scope. 
Oct 31 00:46:21.800000 audit[4585]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6820bf0 a2=3 a3=1 items=0 ppid=1 pid=4585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:21.812522 kernel: audit: type=1300 audit(1761871581.800:439): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6820bf0 a2=3 a3=1 items=0 ppid=1 pid=4585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:21.800000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:21.813793 kernel: audit: type=1327 audit(1761871581.800:439): proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:21.817000 audit[4585]: USER_START pid=4585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.818000 audit[4588]: CRED_ACQ pid=4588 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.825115 kernel: audit: type=1105 audit(1761871581.817:440): pid=4585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.825165 kernel: audit: type=1103 audit(1761871581.818:441): pid=4588 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.950629 sshd[4585]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:21.950000 audit[4585]: USER_END pid=4585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.953350 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:35472.service: Deactivated successfully. Oct 31 00:46:21.954626 systemd-logind[1305]: Session 9 logged out. Waiting for processes to exit. Oct 31 00:46:21.954629 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 00:46:21.950000 audit[4585]: CRED_DISP pid=4585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.961285 kernel: audit: type=1106 audit(1761871581.950:442): pid=4585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.961370 kernel: audit: type=1104 audit(1761871581.950:443): pid=4585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:21.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.54:22-10.0.0.1:35472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:21.959682 systemd-logind[1305]: Removed session 9. Oct 31 00:46:24.423595 env[1321]: time="2025-10-31T00:46:24.423545399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:46:24.622601 env[1321]: time="2025-10-31T00:46:24.622547087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:24.623572 env[1321]: time="2025-10-31T00:46:24.623523629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:46:24.623825 kubelet[2117]: E1031 00:46:24.623769 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:46:24.624110 kubelet[2117]: E1031 00:46:24.623836 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:46:24.624110 kubelet[2117]: E1031 00:46:24.623933 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed710fbf9c8d49d2a72c1ed130a86450,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg2tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5687bd54d-fctt7_calico-system(69426558-399f-4dbc-9939-230d74bb54fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:24.626080 env[1321]: time="2025-10-31T00:46:24.626040435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:46:24.825670 
env[1321]: time="2025-10-31T00:46:24.825614286Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:24.826621 env[1321]: time="2025-10-31T00:46:24.826551503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:46:24.826868 kubelet[2117]: E1031 00:46:24.826826 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:46:24.826937 kubelet[2117]: E1031 00:46:24.826884 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:46:24.827047 kubelet[2117]: E1031 00:46:24.827006 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wg2tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5687bd54d-fctt7_calico-system(69426558-399f-4dbc-9939-230d74bb54fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:24.828243 kubelet[2117]: E1031 00:46:24.828203 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5687bd54d-fctt7" podUID="69426558-399f-4dbc-9939-230d74bb54fd" Oct 31 00:46:26.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.54:22-10.0.0.1:35488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:26.954919 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:35488.service. Oct 31 00:46:26.955786 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 00:46:26.955961 kernel: audit: type=1130 audit(1761871586.954:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.54:22-10.0.0.1:35488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:26.993000 audit[4604]: USER_ACCT pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:26.993713 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 35488 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:26.995204 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:26.994000 audit[4604]: CRED_ACQ pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:26.999772 systemd[1]: Started session-10.scope. Oct 31 00:46:27.000059 kernel: audit: type=1101 audit(1761871586.993:446): pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.000102 kernel: audit: type=1103 audit(1761871586.994:447): pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.000832 systemd-logind[1305]: New session 10 of user core. 
Oct 31 00:46:27.002333 kernel: audit: type=1006 audit(1761871586.994:448): pid=4604 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Oct 31 00:46:27.002400 kernel: audit: type=1300 audit(1761871586.994:448): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6cf8550 a2=3 a3=1 items=0 ppid=1 pid=4604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:26.994000 audit[4604]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6cf8550 a2=3 a3=1 items=0 ppid=1 pid=4604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:26.994000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:27.007469 kernel: audit: type=1327 audit(1761871586.994:448): proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:27.007608 kernel: audit: type=1105 audit(1761871587.006:449): pid=4604 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.006000 audit[4604]: USER_START pid=4604 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.007000 audit[4607]: CRED_ACQ pid=4607 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 
31 00:46:27.015883 kernel: audit: type=1103 audit(1761871587.007:450): pid=4607 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.138709 sshd[4604]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:27.139364 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:35502.service. Oct 31 00:46:27.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.54:22-10.0.0.1:35502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:27.140000 audit[4604]: USER_END pid=4604 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.143586 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:35488.service: Deactivated successfully. Oct 31 00:46:27.144475 systemd-logind[1305]: Session 10 logged out. Waiting for processes to exit. Oct 31 00:46:27.145080 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 00:46:27.145786 systemd-logind[1305]: Removed session 10. Oct 31 00:46:27.146981 kernel: audit: type=1130 audit(1761871587.139:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.54:22-10.0.0.1:35502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:27.147104 kernel: audit: type=1106 audit(1761871587.140:452): pid=4604 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.141000 audit[4604]: CRED_DISP pid=4604 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.54:22-10.0.0.1:35488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:27.183000 audit[4617]: USER_ACCT pid=4617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.184352 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 35502 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:27.184000 audit[4617]: CRED_ACQ pid=4617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.184000 audit[4617]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc51da8d0 a2=3 a3=1 items=0 ppid=1 pid=4617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:27.184000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:27.185787 sshd[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:27.189754 systemd-logind[1305]: New session 11 of user core. Oct 31 00:46:27.190530 systemd[1]: Started session-11.scope. Oct 31 00:46:27.194000 audit[4617]: USER_START pid=4617 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.195000 audit[4622]: CRED_ACQ pid=4622 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.358298 sshd[4617]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:27.361000 audit[4617]: USER_END pid=4617 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.361000 audit[4617]: CRED_DISP pid=4617 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.54:22-10.0.0.1:35516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:27.364577 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:35516.service. 
Oct 31 00:46:27.368976 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:35502.service: Deactivated successfully. Oct 31 00:46:27.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.54:22-10.0.0.1:35502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:27.370463 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 00:46:27.374023 systemd-logind[1305]: Session 11 logged out. Waiting for processes to exit. Oct 31 00:46:27.375922 systemd-logind[1305]: Removed session 11. Oct 31 00:46:27.408000 audit[4629]: USER_ACCT pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.409627 sshd[4629]: Accepted publickey for core from 10.0.0.1 port 35516 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:27.410000 audit[4629]: CRED_ACQ pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.411000 audit[4629]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe5e3010 a2=3 a3=1 items=0 ppid=1 pid=4629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:27.411000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:27.411807 sshd[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:27.417358 systemd-logind[1305]: New session 12 of user core. Oct 31 00:46:27.417609 systemd[1]: Started session-12.scope. 
Oct 31 00:46:27.420000 audit[4629]: USER_START pid=4629 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.422000 audit[4634]: CRED_ACQ pid=4634 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.424350 env[1321]: time="2025-10-31T00:46:27.424306102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:46:27.546186 sshd[4629]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:27.546000 audit[4629]: USER_END pid=4629 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.546000 audit[4629]: CRED_DISP pid=4629 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:27.548750 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:35516.service: Deactivated successfully. Oct 31 00:46:27.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.54:22-10.0.0.1:35516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:27.549809 systemd-logind[1305]: Session 12 logged out. Waiting for processes to exit. Oct 31 00:46:27.549869 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 31 00:46:27.550603 systemd-logind[1305]: Removed session 12. Oct 31 00:46:27.641358 env[1321]: time="2025-10-31T00:46:27.641217501Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:27.642872 env[1321]: time="2025-10-31T00:46:27.642758117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:46:27.643057 kubelet[2117]: E1031 00:46:27.642990 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:46:27.643057 kubelet[2117]: E1031 00:46:27.643039 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:46:27.643390 kubelet[2117]: E1031 00:46:27.643309 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5v2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67c5c54685-nbdhs_calico-system(40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:27.644907 kubelet[2117]: E1031 00:46:27.644399 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:27.645879 env[1321]: time="2025-10-31T00:46:27.644554689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:46:27.848851 env[1321]: 
time="2025-10-31T00:46:27.848780788Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:27.849877 env[1321]: time="2025-10-31T00:46:27.849830175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:46:27.850109 kubelet[2117]: E1031 00:46:27.850072 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:46:27.850160 kubelet[2117]: E1031 00:46:27.850123 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:46:27.850385 kubelet[2117]: E1031 00:46:27.850342 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfz7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tzh9k_calico-system(542c9f03-90da-4571-a183-2191a31bfb63): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:27.850857 env[1321]: time="2025-10-31T00:46:27.850802752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:46:27.851586 kubelet[2117]: E1031 00:46:27.851555 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:28.062570 env[1321]: time="2025-10-31T00:46:28.062510890Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Oct 31 00:46:28.063579 env[1321]: time="2025-10-31T00:46:28.063530431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:46:28.064048 kubelet[2117]: E1031 00:46:28.063773 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:28.064048 kubelet[2117]: E1031 00:46:28.063844 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:28.064048 kubelet[2117]: E1031 00:46:28.063982 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zphqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-94987b775-7bbdc_calico-apiserver(68befc49-9413-4be7-9089-5bb6c17bda13): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:28.065200 kubelet[2117]: E1031 00:46:28.065159 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13" Oct 31 00:46:29.408612 env[1321]: time="2025-10-31T00:46:29.408565192Z" level=info msg="StopPodSandbox for \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\"" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.455 [WARNING][4661] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--fhccb-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"07e12617-5c5d-4e42-9bef-37ca707707aa", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd", Pod:"calico-apiserver-94987b775-fhccb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50cda0c9b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.455 [INFO][4661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.455 [INFO][4661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" iface="eth0" netns="" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.455 [INFO][4661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.455 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.481 [INFO][4672] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.481 [INFO][4672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.481 [INFO][4672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.494 [WARNING][4672] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.494 [INFO][4672] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.496 [INFO][4672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:29.500362 env[1321]: 2025-10-31 00:46:29.498 [INFO][4661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.500979 env[1321]: time="2025-10-31T00:46:29.500936451Z" level=info msg="TearDown network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\" successfully" Oct 31 00:46:29.501069 env[1321]: time="2025-10-31T00:46:29.501049266Z" level=info msg="StopPodSandbox for \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\" returns successfully" Oct 31 00:46:29.501706 env[1321]: time="2025-10-31T00:46:29.501671432Z" level=info msg="RemovePodSandbox for \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\"" Oct 31 00:46:29.501775 env[1321]: time="2025-10-31T00:46:29.501718598Z" level=info msg="Forcibly stopping sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\"" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.540 [WARNING][4690] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--fhccb-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"07e12617-5c5d-4e42-9bef-37ca707707aa", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79d883e93d1b7231a6f3e6d0748d94b7c6edece398707c812b8f75ed3821d5bd", Pod:"calico-apiserver-94987b775-fhccb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50cda0c9b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.540 [INFO][4690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.540 [INFO][4690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" iface="eth0" netns="" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.540 [INFO][4690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.540 [INFO][4690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.563 [INFO][4698] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.563 [INFO][4698] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.563 [INFO][4698] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.573 [WARNING][4698] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.573 [INFO][4698] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" HandleID="k8s-pod-network.785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Workload="localhost-k8s-calico--apiserver--94987b775--fhccb-eth0" Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.575 [INFO][4698] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:29.578288 env[1321]: 2025-10-31 00:46:29.576 [INFO][4690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873" Oct 31 00:46:29.578780 env[1321]: time="2025-10-31T00:46:29.578327770Z" level=info msg="TearDown network for sandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\" successfully" Oct 31 00:46:29.582549 env[1321]: time="2025-10-31T00:46:29.582511265Z" level=info msg="RemovePodSandbox \"785f43513f8e9cce59f434073e3396dbd5820904064201c03caa81823178c873\" returns successfully" Oct 31 00:46:29.583215 env[1321]: time="2025-10-31T00:46:29.583187118Z" level=info msg="StopPodSandbox for \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\"" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.648 [WARNING][4716] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09069f0b-a951-47c9-a38b-43b3cfe8a3b6", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a", Pod:"coredns-668d6bf9bc-rrkhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6f07bd2f92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.648 [INFO][4716] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.648 [INFO][4716] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" iface="eth0" netns="" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.648 [INFO][4716] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.648 [INFO][4716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.668 [INFO][4725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.668 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.668 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.677 [WARNING][4725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.678 [INFO][4725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.679 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:29.683855 env[1321]: 2025-10-31 00:46:29.681 [INFO][4716] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.683855 env[1321]: time="2025-10-31T00:46:29.683150220Z" level=info msg="TearDown network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\" successfully" Oct 31 00:46:29.683855 env[1321]: time="2025-10-31T00:46:29.683190346Z" level=info msg="StopPodSandbox for \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\" returns successfully" Oct 31 00:46:29.684386 env[1321]: time="2025-10-31T00:46:29.684012259Z" level=info msg="RemovePodSandbox for \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\"" Oct 31 00:46:29.684386 env[1321]: time="2025-10-31T00:46:29.684059585Z" level=info msg="Forcibly stopping sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\"" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.720 [WARNING][4744] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09069f0b-a951-47c9-a38b-43b3cfe8a3b6", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3769560c212a8e9f2487f7421f33e6d70d4d0c890fb4e726381564b9e7bb202a", Pod:"coredns-668d6bf9bc-rrkhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6f07bd2f92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.720 [INFO][4744] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.720 [INFO][4744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" iface="eth0" netns="" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.720 [INFO][4744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.720 [INFO][4744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.739 [INFO][4754] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.739 [INFO][4754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.739 [INFO][4754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.748 [WARNING][4754] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.749 [INFO][4754] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" HandleID="k8s-pod-network.042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Workload="localhost-k8s-coredns--668d6bf9bc--rrkhq-eth0" Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.751 [INFO][4754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:29.755010 env[1321]: 2025-10-31 00:46:29.753 [INFO][4744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c" Oct 31 00:46:29.755513 env[1321]: time="2025-10-31T00:46:29.755056066Z" level=info msg="TearDown network for sandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\" successfully" Oct 31 00:46:29.758501 env[1321]: time="2025-10-31T00:46:29.758454333Z" level=info msg="RemovePodSandbox \"042a008c03b39049f24765eea7909e130351743ce9472844cf4cbbf1c6ca3f6c\" returns successfully" Oct 31 00:46:29.758967 env[1321]: time="2025-10-31T00:46:29.758937759Z" level=info msg="StopPodSandbox for \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\"" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.795 [WARNING][4772] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0", GenerateName:"calico-kube-controllers-67c5c54685-", Namespace:"calico-system", SelfLink:"", UID:"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c5c54685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c", Pod:"calico-kube-controllers-67c5c54685-nbdhs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5adf2c148a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.795 [INFO][4772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.795 [INFO][4772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" iface="eth0" netns="" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.795 [INFO][4772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.795 [INFO][4772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.820 [INFO][4781] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.820 [INFO][4781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.821 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.830 [WARNING][4781] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.830 [INFO][4781] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.832 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:29.836179 env[1321]: 2025-10-31 00:46:29.834 [INFO][4772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.836632 env[1321]: time="2025-10-31T00:46:29.836208382Z" level=info msg="TearDown network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\" successfully" Oct 31 00:46:29.836632 env[1321]: time="2025-10-31T00:46:29.836245267Z" level=info msg="StopPodSandbox for \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\" returns successfully" Oct 31 00:46:29.836787 env[1321]: time="2025-10-31T00:46:29.836737535Z" level=info msg="RemovePodSandbox for \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\"" Oct 31 00:46:29.836832 env[1321]: time="2025-10-31T00:46:29.836776340Z" level=info msg="Forcibly stopping sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\"" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.880 [WARNING][4800] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0", GenerateName:"calico-kube-controllers-67c5c54685-", Namespace:"calico-system", SelfLink:"", UID:"40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c5c54685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22633efd744cae224aeb9fecb5dcd139ae870927d9b70c33e90746ec57f1d98c", Pod:"calico-kube-controllers-67c5c54685-nbdhs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5adf2c148a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.880 [INFO][4800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.880 [INFO][4800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" iface="eth0" netns="" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.880 [INFO][4800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.880 [INFO][4800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.900 [INFO][4809] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.900 [INFO][4809] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.900 [INFO][4809] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.909 [WARNING][4809] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.909 [INFO][4809] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" HandleID="k8s-pod-network.a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Workload="localhost-k8s-calico--kube--controllers--67c5c54685--nbdhs-eth0" Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.911 [INFO][4809] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:29.916108 env[1321]: 2025-10-31 00:46:29.914 [INFO][4800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9" Oct 31 00:46:29.916602 env[1321]: time="2025-10-31T00:46:29.916102565Z" level=info msg="TearDown network for sandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\" successfully" Oct 31 00:46:29.919619 env[1321]: time="2025-10-31T00:46:29.919567841Z" level=info msg="RemovePodSandbox \"a4f9ed2ee914bc270e507b9c210fbf6681538f23bdf9c66c34a25f3257c4d6d9\" returns successfully" Oct 31 00:46:29.920107 env[1321]: time="2025-10-31T00:46:29.920078432Z" level=info msg="StopPodSandbox for \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\"" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.955 [WARNING][4827] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tzh9k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"542c9f03-90da-4571-a183-2191a31bfb63", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29", Pod:"goldmane-666569f655-tzh9k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39625c0411a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.955 [INFO][4827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.955 [INFO][4827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" iface="eth0" netns="" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.955 [INFO][4827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.955 [INFO][4827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.976 [INFO][4836] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.976 [INFO][4836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.976 [INFO][4836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.985 [WARNING][4836] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.985 [INFO][4836] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.986 [INFO][4836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:46:29.990264 env[1321]: 2025-10-31 00:46:29.988 [INFO][4827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:29.990722 env[1321]: time="2025-10-31T00:46:29.990296645Z" level=info msg="TearDown network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\" successfully" Oct 31 00:46:29.990722 env[1321]: time="2025-10-31T00:46:29.990329689Z" level=info msg="StopPodSandbox for \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\" returns successfully" Oct 31 00:46:29.990864 env[1321]: time="2025-10-31T00:46:29.990826518Z" level=info msg="RemovePodSandbox for \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\"" Oct 31 00:46:29.990900 env[1321]: time="2025-10-31T00:46:29.990868163Z" level=info msg="Forcibly stopping sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\"" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.026 [WARNING][4854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tzh9k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"542c9f03-90da-4571-a183-2191a31bfb63", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6a84f447a77ca9c8bf33db6aca3357a04363e1bbfb0ddd6def7b407d58ebf29", Pod:"goldmane-666569f655-tzh9k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39625c0411a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.026 [INFO][4854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.026 [INFO][4854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" iface="eth0" netns="" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.026 [INFO][4854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.026 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.045 [INFO][4863] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.045 [INFO][4863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.045 [INFO][4863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.055 [WARNING][4863] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.055 [INFO][4863] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" HandleID="k8s-pod-network.314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Workload="localhost-k8s-goldmane--666569f655--tzh9k-eth0" Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.056 [INFO][4863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:46:30.060214 env[1321]: 2025-10-31 00:46:30.058 [INFO][4854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3" Oct 31 00:46:30.060692 env[1321]: time="2025-10-31T00:46:30.060254346Z" level=info msg="TearDown network for sandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\" successfully" Oct 31 00:46:30.063336 env[1321]: time="2025-10-31T00:46:30.063297680Z" level=info msg="RemovePodSandbox \"314392fbeea1b62f0ea0297cb1f2180e888e88607b50a7427504c385525d94c3\" returns successfully" Oct 31 00:46:30.063868 env[1321]: time="2025-10-31T00:46:30.063839434Z" level=info msg="StopPodSandbox for \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\"" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.098 [WARNING][4880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a", Pod:"coredns-668d6bf9bc-wsg2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali371646424cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.098 [INFO][4880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.098 [INFO][4880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" iface="eth0" netns="" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.098 [INFO][4880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.098 [INFO][4880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.120 [INFO][4890] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.120 [INFO][4890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.120 [INFO][4890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.130 [WARNING][4890] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.130 [INFO][4890] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.135 [INFO][4890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:46:30.138546 env[1321]: 2025-10-31 00:46:30.136 [INFO][4880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.139075 env[1321]: time="2025-10-31T00:46:30.139040715Z" level=info msg="TearDown network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\" successfully" Oct 31 00:46:30.139161 env[1321]: time="2025-10-31T00:46:30.139144249Z" level=info msg="StopPodSandbox for \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\" returns successfully" Oct 31 00:46:30.139780 env[1321]: time="2025-10-31T00:46:30.139748451Z" level=info msg="RemovePodSandbox for \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\"" Oct 31 00:46:30.139842 env[1321]: time="2025-10-31T00:46:30.139794457Z" level=info msg="Forcibly stopping sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\"" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.173 [WARNING][4909] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ed54c017-1d5f-47b7-b1f3-7a6f4e7f6715", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9017317ac9534b04ef80bfe9648dd7e31293754390a3512e8fbfe32f778d01a", Pod:"coredns-668d6bf9bc-wsg2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali371646424cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.173 [INFO][4909] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.173 [INFO][4909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" iface="eth0" netns="" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.173 [INFO][4909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.173 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.194 [INFO][4919] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.194 [INFO][4919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.194 [INFO][4919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.203 [WARNING][4919] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.203 [INFO][4919] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" HandleID="k8s-pod-network.acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Workload="localhost-k8s-coredns--668d6bf9bc--wsg2w-eth0" Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.204 [INFO][4919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:30.207926 env[1321]: 2025-10-31 00:46:30.206 [INFO][4909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6" Oct 31 00:46:30.208532 env[1321]: time="2025-10-31T00:46:30.207961100Z" level=info msg="TearDown network for sandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\" successfully" Oct 31 00:46:30.217339 env[1321]: time="2025-10-31T00:46:30.217284890Z" level=info msg="RemovePodSandbox \"acbac1a3890d4c99770def2c4ae50530ee61a2ff6d3e807a23c4c02a9fb3b0b6\" returns successfully" Oct 31 00:46:30.217889 env[1321]: time="2025-10-31T00:46:30.217852527Z" level=info msg="StopPodSandbox for \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\"" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.249 [WARNING][4937] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"68befc49-9413-4be7-9089-5bb6c17bda13", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5", Pod:"calico-apiserver-94987b775-7bbdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dff92c2027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.249 [INFO][4937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.249 [INFO][4937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" iface="eth0" netns="" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.249 [INFO][4937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.249 [INFO][4937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.267 [INFO][4945] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.267 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.268 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.276 [WARNING][4945] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.276 [INFO][4945] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.278 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:30.282501 env[1321]: 2025-10-31 00:46:30.279 [INFO][4937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.282501 env[1321]: time="2025-10-31T00:46:30.281658856Z" level=info msg="TearDown network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\" successfully" Oct 31 00:46:30.282501 env[1321]: time="2025-10-31T00:46:30.281690140Z" level=info msg="StopPodSandbox for \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\" returns successfully" Oct 31 00:46:30.282501 env[1321]: time="2025-10-31T00:46:30.282271540Z" level=info msg="RemovePodSandbox for \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\"" Oct 31 00:46:30.282501 env[1321]: time="2025-10-31T00:46:30.282303384Z" level=info msg="Forcibly stopping sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\"" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.319 [WARNING][4963] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0", GenerateName:"calico-apiserver-94987b775-", Namespace:"calico-apiserver", SelfLink:"", UID:"68befc49-9413-4be7-9089-5bb6c17bda13", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94987b775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f757d701c75a7e363410ecbcb9025b90238b46a160e8415c254c9379f2184ab5", Pod:"calico-apiserver-94987b775-7bbdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dff92c2027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.319 [INFO][4963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.319 [INFO][4963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" iface="eth0" netns="" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.319 [INFO][4963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.320 [INFO][4963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.343 [INFO][4972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.343 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.343 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.355 [WARNING][4972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.355 [INFO][4972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" HandleID="k8s-pod-network.6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Workload="localhost-k8s-calico--apiserver--94987b775--7bbdc-eth0" Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.356 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:30.360243 env[1321]: 2025-10-31 00:46:30.358 [INFO][4963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee" Oct 31 00:46:30.360714 env[1321]: time="2025-10-31T00:46:30.360272762Z" level=info msg="TearDown network for sandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\" successfully" Oct 31 00:46:30.363185 env[1321]: time="2025-10-31T00:46:30.363152034Z" level=info msg="RemovePodSandbox \"6c8f020e488b78e5a7c8a91e5ef1aa408a63a068f3bd0e3d0a4ee05d08e666ee\" returns successfully" Oct 31 00:46:30.363659 env[1321]: time="2025-10-31T00:46:30.363630339Z" level=info msg="StopPodSandbox for \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\"" Oct 31 00:46:30.423544 env[1321]: time="2025-10-31T00:46:30.423489890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.395 [WARNING][4989] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" WorkloadEndpoint="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 
00:46:30.438658 env[1321]: 2025-10-31 00:46:30.395 [INFO][4989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.395 [INFO][4989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" iface="eth0" netns="" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.395 [INFO][4989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.395 [INFO][4989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.412 [INFO][4998] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.412 [INFO][4998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.412 [INFO][4998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.425 [WARNING][4998] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.425 [INFO][4998] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.431 [INFO][4998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:30.438658 env[1321]: 2025-10-31 00:46:30.437 [INFO][4989] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.438658 env[1321]: time="2025-10-31T00:46:30.438637273Z" level=info msg="TearDown network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\" successfully" Oct 31 00:46:30.439265 env[1321]: time="2025-10-31T00:46:30.438668357Z" level=info msg="StopPodSandbox for \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\" returns successfully" Oct 31 00:46:30.440677 env[1321]: time="2025-10-31T00:46:30.440223449Z" level=info msg="RemovePodSandbox for \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\"" Oct 31 00:46:30.441324 env[1321]: time="2025-10-31T00:46:30.441247229Z" level=info msg="Forcibly stopping sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\"" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.487 [WARNING][5017] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" 
WorkloadEndpoint="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.487 [INFO][5017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.487 [INFO][5017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" iface="eth0" netns="" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.487 [INFO][5017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.487 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.507 [INFO][5026] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.508 [INFO][5026] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.508 [INFO][5026] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.517 [WARNING][5026] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.517 [INFO][5026] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" HandleID="k8s-pod-network.9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Workload="localhost-k8s-whisker--689d974798--k2nfx-eth0" Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.518 [INFO][5026] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:30.522725 env[1321]: 2025-10-31 00:46:30.520 [INFO][5017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48" Oct 31 00:46:30.523124 env[1321]: time="2025-10-31T00:46:30.522732845Z" level=info msg="TearDown network for sandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\" successfully" Oct 31 00:46:30.637012 env[1321]: time="2025-10-31T00:46:30.636822622Z" level=info msg="RemovePodSandbox \"9a18581ad23bff77e34145f241fb3c7b445d69ddf5c47aca74e6c48b5aea0c48\" returns successfully" Oct 31 00:46:30.638095 env[1321]: time="2025-10-31T00:46:30.638061190Z" level=info msg="StopPodSandbox for \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\"" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.687 [WARNING][5043] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--25c9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0bdf479-9385-4085-afb4-2cdc588aefd9", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60", Pod:"csi-node-driver-25c9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali459fb2ee066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.687 [INFO][5043] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.688 [INFO][5043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" iface="eth0" netns="" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.688 [INFO][5043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.688 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.707 [INFO][5051] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.707 [INFO][5051] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.707 [INFO][5051] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.717 [WARNING][5051] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.718 [INFO][5051] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.719 [INFO][5051] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:46:30.723215 env[1321]: 2025-10-31 00:46:30.721 [INFO][5043] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.723662 env[1321]: time="2025-10-31T00:46:30.723239750Z" level=info msg="TearDown network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\" successfully" Oct 31 00:46:30.723662 env[1321]: time="2025-10-31T00:46:30.723270434Z" level=info msg="StopPodSandbox for \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\" returns successfully" Oct 31 00:46:30.723741 env[1321]: time="2025-10-31T00:46:30.723710214Z" level=info msg="RemovePodSandbox for \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\"" Oct 31 00:46:30.723783 env[1321]: time="2025-10-31T00:46:30.723750699Z" level=info msg="Forcibly stopping sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\"" Oct 31 00:46:30.755490 env[1321]: time="2025-10-31T00:46:30.755442375Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:30.756864 env[1321]: time="2025-10-31T00:46:30.756817122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:46:30.757087 kubelet[2117]: E1031 00:46:30.757042 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:30.757402 kubelet[2117]: E1031 00:46:30.757099 2117 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:30.757402 kubelet[2117]: E1031 00:46:30.757318 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvxnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-94987b775-fhccb_calico-apiserver(07e12617-5c5d-4e42-9bef-37ca707707aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:30.757574 env[1321]: time="2025-10-31T00:46:30.757546821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:46:30.758591 kubelet[2117]: E1031 00:46:30.758556 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.760 [WARNING][5070] 
cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--25c9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0bdf479-9385-4085-afb4-2cdc588aefd9", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de6f709e4fa6a2f18944d0032557af9f4d32ccb3bd2a5223f1ece67177037d60", Pod:"csi-node-driver-25c9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali459fb2ee066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.760 [INFO][5070] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.760 [INFO][5070] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" iface="eth0" netns="" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.760 [INFO][5070] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.760 [INFO][5070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.783 [INFO][5078] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.783 [INFO][5078] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.783 [INFO][5078] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.792 [WARNING][5078] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.792 [INFO][5078] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" HandleID="k8s-pod-network.d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Workload="localhost-k8s-csi--node--driver--25c9f-eth0" Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.793 [INFO][5078] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:46:30.797395 env[1321]: 2025-10-31 00:46:30.795 [INFO][5070] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799" Oct 31 00:46:30.797852 env[1321]: time="2025-10-31T00:46:30.797441054Z" level=info msg="TearDown network for sandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\" successfully" Oct 31 00:46:30.800216 env[1321]: time="2025-10-31T00:46:30.800173426Z" level=info msg="RemovePodSandbox \"d8d489f060f42bc646b0305bb7582013f0ec0376bb0130dd0743ea7fd2747799\" returns successfully" Oct 31 00:46:30.975176 env[1321]: time="2025-10-31T00:46:30.975119650Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:30.977891 env[1321]: time="2025-10-31T00:46:30.977842581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:46:30.978098 kubelet[2117]: E1031 00:46:30.978061 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:46:30.978154 kubelet[2117]: E1031 00:46:30.978112 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:46:30.978261 kubelet[2117]: E1031 00:46:30.978222 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Se
curityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:30.980254 env[1321]: time="2025-10-31T00:46:30.980219985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:46:31.296497 env[1321]: time="2025-10-31T00:46:31.296342955Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:31.300598 env[1321]: time="2025-10-31T00:46:31.300507277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:46:31.300861 kubelet[2117]: E1031 00:46:31.300806 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:46:31.300926 kubelet[2117]: E1031 00:46:31.300859 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:46:31.301006 kubelet[2117]: E1031 00:46:31.300965 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotP
resent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:31.302176 kubelet[2117]: E1031 00:46:31.302120 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:32.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@12-10.0.0.54:22-10.0.0.1:60740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:32.550584 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:60740.service. Oct 31 00:46:32.551609 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 31 00:46:32.551688 kernel: audit: type=1130 audit(1761871592.549:472): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.54:22-10.0.0.1:60740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:32.591000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.593608 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 60740 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:32.595001 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:32.593000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.600067 kernel: audit: type=1101 audit(1761871592.591:473): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.600154 kernel: audit: type=1103 audit(1761871592.593:474): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.600179 kernel: audit: type=1006 audit(1761871592.593:475): pid=5091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Oct 31 00:46:32.593000 audit[5091]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd80c03e0 a2=3 a3=1 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:32.605783 kernel: audit: type=1300 audit(1761871592.593:475): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd80c03e0 a2=3 a3=1 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:32.593000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:32.607186 kernel: audit: type=1327 audit(1761871592.593:475): proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:32.610254 systemd-logind[1305]: New session 13 of user core. Oct 31 00:46:32.611331 systemd[1]: Started session-13.scope. 
Oct 31 00:46:32.620000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.623000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.629556 kernel: audit: type=1105 audit(1761871592.620:476): pid=5091 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.629652 kernel: audit: type=1103 audit(1761871592.623:477): pid=5094 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.757404 sshd[5091]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:32.756000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.757000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.765358 kernel: audit: type=1106 audit(1761871592.756:478): 
pid=5091 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.765403 kernel: audit: type=1104 audit(1761871592.757:479): pid=5091 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:32.767373 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:60740.service: Deactivated successfully. Oct 31 00:46:32.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.54:22-10.0.0.1:60740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:32.768506 systemd-logind[1305]: Session 13 logged out. Waiting for processes to exit. Oct 31 00:46:32.768572 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 00:46:32.769258 systemd-logind[1305]: Removed session 13. 
Oct 31 00:46:37.423476 kubelet[2117]: E1031 00:46:37.423408 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5687bd54d-fctt7" podUID="69426558-399f-4dbc-9939-230d74bb54fd" Oct 31 00:46:37.759672 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:60744.service. Oct 31 00:46:37.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.54:22-10.0.0.1:60744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:37.760990 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 00:46:37.761064 kernel: audit: type=1130 audit(1761871597.758:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.54:22-10.0.0.1:60744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:37.802000 audit[5110]: USER_ACCT pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.804542 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 60744 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:37.807000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.809184 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:37.814475 kernel: audit: type=1101 audit(1761871597.802:482): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.814593 kernel: audit: type=1103 audit(1761871597.807:483): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.814629 kernel: audit: type=1006 audit(1761871597.807:484): pid=5110 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Oct 31 00:46:37.815837 systemd-logind[1305]: New session 14 of user core. 
Oct 31 00:46:37.816605 kernel: audit: type=1300 audit(1761871597.807:484): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4349850 a2=3 a3=1 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:37.807000 audit[5110]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4349850 a2=3 a3=1 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:37.816197 systemd[1]: Started session-14.scope. Oct 31 00:46:37.807000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:37.822406 kernel: audit: type=1327 audit(1761871597.807:484): proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:37.825000 audit[5110]: USER_START pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.835970 kernel: audit: type=1105 audit(1761871597.825:485): pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.833000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.842479 kernel: audit: type=1103 audit(1761871597.833:486): pid=5113 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.975194 sshd[5110]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:37.975000 audit[5110]: USER_END pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.975000 audit[5110]: CRED_DISP pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.992046 kernel: audit: type=1106 audit(1761871597.975:487): pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.992163 kernel: audit: type=1104 audit(1761871597.975:488): pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:37.991740 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:60744.service: Deactivated successfully. Oct 31 00:46:37.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.54:22-10.0.0.1:60744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:37.994704 systemd-logind[1305]: Session 14 logged out. Waiting for processes to exit. 
Oct 31 00:46:37.994747 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 00:46:37.997611 systemd-logind[1305]: Removed session 14. Oct 31 00:46:40.095572 systemd[1]: run-containerd-runc-k8s.io-9140837899ca4f2d218b6d7607eaeb24a547062af74867e0bae1139788719395-runc.brpaFY.mount: Deactivated successfully. Oct 31 00:46:40.181394 kubelet[2117]: E1031 00:46:40.181287 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:40.423291 kubelet[2117]: E1031 00:46:40.423088 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:40.423291 kubelet[2117]: E1031 00:46:40.423128 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:41.423081 kubelet[2117]: E1031 00:46:41.423037 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13" Oct 31 00:46:42.424161 kubelet[2117]: E1031 00:46:42.424113 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:42.978272 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:36572.service. Oct 31 00:46:42.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.54:22-10.0.0.1:36572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:42.982210 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 00:46:42.982323 kernel: audit: type=1130 audit(1761871602.977:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.54:22-10.0.0.1:36572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:43.027000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.028811 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:43.036911 kernel: audit: type=1101 audit(1761871603.027:491): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.036000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.038030 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:43.042687 kernel: audit: type=1103 audit(1761871603.036:492): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.042800 kernel: audit: type=1006 audit(1761871603.037:493): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=15 res=1 Oct 31 00:46:43.037000 audit[5146]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4038c10 a2=3 a3=1 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:43.047389 kernel: audit: type=1300 audit(1761871603.037:493): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4038c10 a2=3 a3=1 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:43.047201 systemd-logind[1305]: New session 15 of user core. Oct 31 00:46:43.037000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:43.048193 systemd[1]: Started session-15.scope. Oct 31 00:46:43.048802 kernel: audit: type=1327 audit(1761871603.037:493): proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:43.059000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.065000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.069325 kernel: audit: type=1105 audit(1761871603.059:494): pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.069515 kernel: 
audit: type=1103 audit(1761871603.065:495): pid=5149 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.278112 sshd[5146]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:43.278000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.282364 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:36572.service: Deactivated successfully. Oct 31 00:46:43.283532 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 00:46:43.278000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.287369 kernel: audit: type=1106 audit(1761871603.278:496): pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.287473 kernel: audit: type=1104 audit(1761871603.278:497): pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:43.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.54:22-10.0.0.1:36572 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Oct 31 00:46:43.292479 systemd-logind[1305]: Session 15 logged out. Waiting for processes to exit. Oct 31 00:46:43.293177 systemd-logind[1305]: Removed session 15. Oct 31 00:46:45.423689 kubelet[2117]: E1031 00:46:45.423632 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:46:48.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.54:22-10.0.0.1:36582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:48.281633 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:36582.service. Oct 31 00:46:48.282531 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 00:46:48.282578 kernel: audit: type=1130 audit(1761871608.280:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.54:22-10.0.0.1:36582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:48.331000 audit[5161]: USER_ACCT pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.333479 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 36582 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:48.337446 kernel: audit: type=1101 audit(1761871608.331:500): pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.336000 audit[5161]: CRED_ACQ pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.338593 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:48.343428 kernel: audit: type=1103 audit(1761871608.336:501): pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.343555 kernel: audit: type=1006 audit(1761871608.336:502): pid=5161 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Oct 31 00:46:48.343578 kernel: audit: type=1300 audit(1761871608.336:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9d09040 a2=3 a3=1 items=0 ppid=1 pid=5161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 00:46:48.336000 audit[5161]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9d09040 a2=3 a3=1 items=0 ppid=1 pid=5161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:48.336000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:48.350585 kernel: audit: type=1327 audit(1761871608.336:502): proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:48.349379 systemd-logind[1305]: New session 16 of user core. Oct 31 00:46:48.350351 systemd[1]: Started session-16.scope. Oct 31 00:46:48.353000 audit[5161]: USER_START pid=5161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.358000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.363290 kernel: audit: type=1105 audit(1761871608.353:503): pid=5161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.363476 kernel: audit: type=1103 audit(1761871608.358:504): pid=5164 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.423498 env[1321]: time="2025-10-31T00:46:48.423450792Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:46:48.579000 audit[5161]: USER_END pid=5161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.579080 sshd[5161]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:48.582047 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:36588.service. Oct 31 00:46:48.579000 audit[5161]: CRED_DISP pid=5161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.588535 kernel: audit: type=1106 audit(1761871608.579:505): pid=5161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.588643 kernel: audit: type=1104 audit(1761871608.579:506): pid=5161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.54:22-10.0.0.1:36588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:48.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.54:22-10.0.0.1:36582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:48.599542 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:36582.service: Deactivated successfully. Oct 31 00:46:48.601661 systemd-logind[1305]: Session 16 logged out. Waiting for processes to exit. Oct 31 00:46:48.601688 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 00:46:48.604186 systemd-logind[1305]: Removed session 16. Oct 31 00:46:48.626000 audit[5173]: USER_ACCT pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.627782 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 36588 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:48.627000 audit[5173]: CRED_ACQ pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.627000 audit[5173]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeaa0ee00 a2=3 a3=1 items=0 ppid=1 pid=5173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:48.627000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:48.629706 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:48.634527 systemd-logind[1305]: New session 17 of user core. Oct 31 00:46:48.635039 systemd[1]: Started session-17.scope. 
Oct 31 00:46:48.638000 audit[5173]: USER_START pid=5173 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.640000 audit[5178]: CRED_ACQ pid=5178 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.750400 env[1321]: time="2025-10-31T00:46:48.750308928Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:48.755132 env[1321]: time="2025-10-31T00:46:48.755005364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:46:48.755519 kubelet[2117]: E1031 00:46:48.755457 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:46:48.755910 kubelet[2117]: E1031 00:46:48.755531 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:46:48.755910 kubelet[2117]: E1031 00:46:48.755671 2117 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed710fbf9c8d49d2a72c1ed130a86450,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg2tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5687bd54d-fctt7_calico-system(69426558-399f-4dbc-9939-230d74bb54fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:48.758214 env[1321]: time="2025-10-31T00:46:48.758175040Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:46:48.941730 sshd[5173]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:48.941000 audit[5173]: USER_END pid=5173 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.941000 audit[5173]: CRED_DISP pid=5173 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.943343 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:36600.service. Oct 31 00:46:48.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.54:22-10.0.0.1:36600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:48.946404 systemd-logind[1305]: Session 17 logged out. Waiting for processes to exit. Oct 31 00:46:48.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.54:22-10.0.0.1:36588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:48.947569 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:36588.service: Deactivated successfully. Oct 31 00:46:48.949175 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 00:46:48.949758 systemd-logind[1305]: Removed session 17. 
Oct 31 00:46:48.991000 audit[5186]: USER_ACCT pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.992736 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 36600 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:48.992000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:48.992000 audit[5186]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff67bf240 a2=3 a3=1 items=0 ppid=1 pid=5186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:48.992000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:48.994685 sshd[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:48.999257 systemd-logind[1305]: New session 18 of user core. Oct 31 00:46:48.999973 systemd[1]: Started session-18.scope. 
Oct 31 00:46:49.002000 audit[5186]: USER_START pid=5186 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:49.004000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:49.071193 env[1321]: time="2025-10-31T00:46:49.070991493Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:49.072246 env[1321]: time="2025-10-31T00:46:49.072099880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:46:49.072446 kubelet[2117]: E1031 00:46:49.072362 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:46:49.072555 kubelet[2117]: E1031 00:46:49.072445 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:46:49.072616 kubelet[2117]: E1031 00:46:49.072574 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wg2tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-5687bd54d-fctt7_calico-system(69426558-399f-4dbc-9939-230d74bb54fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:49.073779 kubelet[2117]: E1031 00:46:49.073724 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5687bd54d-fctt7" podUID="69426558-399f-4dbc-9939-230d74bb54fd" Oct 31 00:46:49.422387 kubelet[2117]: E1031 00:46:49.422265 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:49.801000 audit[5208]: NETFILTER_CFG table=filter:128 family=2 entries=26 op=nft_register_rule pid=5208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:49.801000 audit[5208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd240e890 a2=0 a3=1 items=0 ppid=2228 pid=5208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 00:46:49.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:49.809000 audit[5208]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=5208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:49.809000 audit[5208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd240e890 a2=0 a3=1 items=0 ppid=2228 pid=5208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:49.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:49.812005 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:45084.service. Oct 31 00:46:49.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.54:22-10.0.0.1:45084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:49.812525 sshd[5186]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:49.812000 audit[5186]: USER_END pid=5186 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:49.812000 audit[5186]: CRED_DISP pid=5186 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:49.815914 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:36600.service: Deactivated successfully. 
Oct 31 00:46:49.817861 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 00:46:49.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.54:22-10.0.0.1:36600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:49.818638 systemd-logind[1305]: Session 18 logged out. Waiting for processes to exit. Oct 31 00:46:49.823307 systemd-logind[1305]: Removed session 18. Oct 31 00:46:49.850000 audit[5214]: NETFILTER_CFG table=filter:130 family=2 entries=38 op=nft_register_rule pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:49.850000 audit[5214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc4ace900 a2=0 a3=1 items=0 ppid=2228 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:49.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:49.855650 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 45084 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:49.854000 audit[5209]: USER_ACCT pid=5209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:49.857107 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:49.855000 audit[5209]: CRED_ACQ pid=5209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Oct 31 00:46:49.855000 audit[5209]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9482740 a2=3 a3=1 items=0 ppid=1 pid=5209 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:49.855000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:49.857000 audit[5214]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:49.857000 audit[5214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc4ace900 a2=0 a3=1 items=0 ppid=2228 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:49.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:49.860852 systemd-logind[1305]: New session 19 of user core. Oct 31 00:46:49.861855 systemd[1]: Started session-19.scope. 
Oct 31 00:46:49.865000 audit[5209]: USER_START pid=5209 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:49.866000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.191732 sshd[5209]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:50.191000 audit[5209]: USER_END pid=5209 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.191000 audit[5209]: CRED_DISP pid=5209 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.54:22-10.0.0.1:45100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:50.194449 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:45100.service. Oct 31 00:46:50.195485 systemd-logind[1305]: Session 19 logged out. Waiting for processes to exit. Oct 31 00:46:50.196115 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:45084.service: Deactivated successfully. 
Oct 31 00:46:50.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.54:22-10.0.0.1:45084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:50.203185 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 00:46:50.203680 systemd-logind[1305]: Removed session 19. Oct 31 00:46:50.245117 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 45100 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:50.243000 audit[5223]: USER_ACCT pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.246599 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:50.244000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.244000 audit[5223]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe83ed880 a2=3 a3=1 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:50.244000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:50.250248 systemd-logind[1305]: New session 20 of user core. Oct 31 00:46:50.251478 systemd[1]: Started session-20.scope. 
Oct 31 00:46:50.254000 audit[5223]: USER_START pid=5223 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.256000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.423276 sshd[5223]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:50.422000 audit[5223]: USER_END pid=5223 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.423000 audit[5223]: CRED_DISP pid=5223 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:50.426906 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:45100.service: Deactivated successfully. Oct 31 00:46:50.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.54:22-10.0.0.1:45100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:50.428150 systemd-logind[1305]: Session 20 logged out. Waiting for processes to exit. Oct 31 00:46:50.428154 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 00:46:50.429124 systemd-logind[1305]: Removed session 20. 
Oct 31 00:46:51.425753 env[1321]: time="2025-10-31T00:46:51.425353915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:46:51.636666 env[1321]: time="2025-10-31T00:46:51.636580409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:51.638607 env[1321]: time="2025-10-31T00:46:51.638519853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:46:51.638920 kubelet[2117]: E1031 00:46:51.638874 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:46:51.639284 kubelet[2117]: E1031 00:46:51.639261 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:46:51.639552 kubelet[2117]: E1031 00:46:51.639492 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfz7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tzh9k_calico-system(542c9f03-90da-4571-a183-2191a31bfb63): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:51.641362 kubelet[2117]: E1031 00:46:51.640909 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63" Oct 31 00:46:52.423193 env[1321]: time="2025-10-31T00:46:52.423143882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:46:52.756323 env[1321]: time="2025-10-31T00:46:52.756201785Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Oct 31 00:46:52.757777 env[1321]: time="2025-10-31T00:46:52.757711292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:46:52.758050 kubelet[2117]: E1031 00:46:52.758006 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:52.758318 kubelet[2117]: E1031 00:46:52.758062 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:46:52.758318 kubelet[2117]: E1031 00:46:52.758193 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zphqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-94987b775-7bbdc_calico-apiserver(68befc49-9413-4be7-9089-5bb6c17bda13): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:52.759359 kubelet[2117]: E1031 00:46:52.759309 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13" Oct 31 00:46:54.422883 env[1321]: time="2025-10-31T00:46:54.422830084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:46:54.622909 env[1321]: time="2025-10-31T00:46:54.622827791Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:54.624159 env[1321]: time="2025-10-31T00:46:54.624103835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:46:54.624400 kubelet[2117]: E1031 00:46:54.624346 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:46:54.624710 kubelet[2117]: E1031 
00:46:54.624438 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:46:54.624891 kubelet[2117]: E1031 00:46:54.624797 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5v2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67c5c54685-nbdhs_calico-system(40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:54.626080 kubelet[2117]: E1031 00:46:54.626019 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c" Oct 31 00:46:55.025000 audit[5242]: NETFILTER_CFG table=filter:132 family=2 entries=26 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:55.029277 kernel: kauditd_printk_skb: 57 callbacks suppressed Oct 31 00:46:55.029349 kernel: audit: type=1325 audit(1761871615.025:548): table=filter:132 family=2 entries=26 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:55.025000 audit[5242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffeab4c5a0 a2=0 a3=1 items=0 ppid=2228 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:55.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:55.035173 kernel: audit: type=1300 audit(1761871615.025:548): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffeab4c5a0 a2=0 a3=1 items=0 ppid=2228 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:55.035247 kernel: audit: type=1327 audit(1761871615.025:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:55.039000 audit[5242]: NETFILTER_CFG table=nat:133 family=2 entries=104 op=nft_register_chain pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:55.039000 audit[5242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffeab4c5a0 a2=0 a3=1 items=0 ppid=2228 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:55.047272 kernel: audit: type=1325 audit(1761871615.039:549): table=nat:133 family=2 entries=104 op=nft_register_chain pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 00:46:55.047349 kernel: audit: type=1300 audit(1761871615.039:549): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffeab4c5a0 a2=0 a3=1 items=0 ppid=2228 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:55.047376 kernel: audit: type=1327 audit(1761871615.039:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:55.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 00:46:55.421904 kubelet[2117]: E1031 00:46:55.421780 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:46:55.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.54:22-10.0.0.1:45112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:55.426705 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:45112.service. Oct 31 00:46:55.430434 kernel: audit: type=1130 audit(1761871615.425:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.54:22-10.0.0.1:45112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:46:55.463000 audit[5243]: USER_ACCT pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.465224 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 45112 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:46:55.467809 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:55.465000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.472024 kernel: audit: type=1101 audit(1761871615.463:551): pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.472160 kernel: audit: type=1103 audit(1761871615.465:552): pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.465000 audit[5243]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdfe7a0c0 a2=3 a3=1 items=0 ppid=1 pid=5243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:46:55.465000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:46:55.475690 kernel: audit: type=1006 audit(1761871615.465:553): pid=5243 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Oct 31 00:46:55.475385 systemd[1]: Started session-21.scope. Oct 31 00:46:55.475519 systemd-logind[1305]: New session 21 of user core. Oct 31 00:46:55.479000 audit[5243]: USER_START pid=5243 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.480000 audit[5246]: CRED_ACQ pid=5246 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.666092 sshd[5243]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:55.665000 audit[5243]: USER_END pid=5243 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.665000 audit[5243]: CRED_DISP pid=5243 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:46:55.668879 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:45112.service: Deactivated successfully. Oct 31 00:46:55.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.54:22-10.0.0.1:45112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:46:55.670044 systemd-logind[1305]: Session 21 logged out. Waiting for processes to exit. 
Oct 31 00:46:55.670137 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 00:46:55.670896 systemd-logind[1305]: Removed session 21. Oct 31 00:46:56.423070 env[1321]: time="2025-10-31T00:46:56.423020217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:46:56.642435 env[1321]: time="2025-10-31T00:46:56.642357247Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:56.644036 env[1321]: time="2025-10-31T00:46:56.643972694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:46:56.644550 kubelet[2117]: E1031 00:46:56.644440 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:46:56.644876 kubelet[2117]: E1031 00:46:56.644566 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:46:56.644876 kubelet[2117]: E1031 00:46:56.644701 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:56.647341 env[1321]: time="2025-10-31T00:46:56.647293787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:46:56.853785 env[1321]: time="2025-10-31T00:46:56.853666560Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:46:56.854854 env[1321]: time="2025-10-31T00:46:56.854782857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:46:56.855061 kubelet[2117]: E1031 00:46:56.855024 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:46:56.855177 kubelet[2117]: E1031 00:46:56.855159 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:46:56.855460 kubelet[2117]: E1031 00:46:56.855393 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-25c9f_calico-system(c0bdf479-9385-4085-afb4-2cdc588aefd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:46:56.857221 kubelet[2117]: E1031 00:46:56.857162 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9" Oct 31 00:46:59.428775 kubelet[2117]: E1031 00:46:59.428740 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:00.422969 env[1321]: time="2025-10-31T00:47:00.422922559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:47:00.668369 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:48582.service. Oct 31 00:47:00.672197 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 31 00:47:00.672312 kernel: audit: type=1130 audit(1761871620.667:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.54:22-10.0.0.1:48582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:47:00.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.54:22-10.0.0.1:48582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:47:00.678134 env[1321]: time="2025-10-31T00:47:00.678016676Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:47:00.680451 env[1321]: time="2025-10-31T00:47:00.680330979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:47:00.693443 kubelet[2117]: E1031 00:47:00.684525 2117 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:47:00.693443 kubelet[2117]: E1031 00:47:00.684585 2117 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:47:00.693443 kubelet[2117]: E1031 00:47:00.684720 2117 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvxnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-94987b775-fhccb_calico-apiserver(07e12617-5c5d-4e42-9bef-37ca707707aa): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:47:00.694943 kubelet[2117]: E1031 00:47:00.694907 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa" Oct 31 00:47:00.734000 audit[5260]: USER_ACCT pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.736628 sshd[5260]: Accepted publickey for core from 10.0.0.1 port 48582 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:47:00.739445 kernel: audit: type=1101 audit(1761871620.734:560): pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.738000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.740778 sshd[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:47:00.747069 kernel: audit: type=1103 
audit(1761871620.738:561): pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.747166 kernel: audit: type=1006 audit(1761871620.738:562): pid=5260 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Oct 31 00:47:00.748033 kernel: audit: type=1300 audit(1761871620.738:562): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3cfcb60 a2=3 a3=1 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:47:00.738000 audit[5260]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3cfcb60 a2=3 a3=1 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 00:47:00.738000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 00:47:00.752449 kernel: audit: type=1327 audit(1761871620.738:562): proctitle=737368643A20636F7265205B707269765D Oct 31 00:47:00.754188 systemd[1]: Started session-22.scope. Oct 31 00:47:00.754463 systemd-logind[1305]: New session 22 of user core. 
Oct 31 00:47:00.758000 audit[5260]: USER_START pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.762000 audit[5263]: CRED_ACQ pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.766882 kernel: audit: type=1105 audit(1761871620.758:563): pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.766967 kernel: audit: type=1103 audit(1761871620.762:564): pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.892821 sshd[5260]: pam_unix(sshd:session): session closed for user core Oct 31 00:47:00.893000 audit[5260]: USER_END pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.896427 systemd-logind[1305]: Session 22 logged out. Waiting for processes to exit. Oct 31 00:47:00.897020 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:48582.service: Deactivated successfully. Oct 31 00:47:00.897995 systemd[1]: session-22.scope: Deactivated successfully. 
Oct 31 00:47:00.898596 systemd-logind[1305]: Removed session 22. Oct 31 00:47:00.893000 audit[5260]: CRED_DISP pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.901885 kernel: audit: type=1106 audit(1761871620.893:565): pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.902006 kernel: audit: type=1104 audit(1761871620.893:566): pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 00:47:00.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.54:22-10.0.0.1:48582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Oct 31 00:47:01.423670 kubelet[2117]: E1031 00:47:01.423473 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5687bd54d-fctt7" podUID="69426558-399f-4dbc-9939-230d74bb54fd"
Oct 31 00:47:04.423472 kubelet[2117]: E1031 00:47:04.423422 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tzh9k" podUID="542c9f03-90da-4571-a183-2191a31bfb63"
Oct 31 00:47:05.423378 kubelet[2117]: E1031 00:47:05.423331 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67c5c54685-nbdhs" podUID="40a3018b-8fab-4f9d-aa6a-7e3a64b3e80c"
Oct 31 00:47:05.426550 kubelet[2117]: E1031 00:47:05.426515 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-7bbdc" podUID="68befc49-9413-4be7-9089-5bb6c17bda13"
Oct 31 00:47:05.895788 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:48588.service.
Oct 31 00:47:05.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.54:22-10.0.0.1:48588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:47:05.896726 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 31 00:47:05.896780 kernel: audit: type=1130 audit(1761871625.894:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.54:22-10.0.0.1:48588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:47:05.933000 audit[5275]: USER_ACCT pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:05.935156 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 48588 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:47:05.937000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:05.938853 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:47:05.942213 kernel: audit: type=1101 audit(1761871625.933:569): pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:05.942295 kernel: audit: type=1103 audit(1761871625.937:570): pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:05.949352 systemd[1]: Started session-23.scope.
Oct 31 00:47:05.937000 audit[5275]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6a14db0 a2=3 a3=1 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:47:05.956408 kernel: audit: type=1006 audit(1761871625.937:571): pid=5275 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Oct 31 00:47:05.956541 kernel: audit: type=1300 audit(1761871625.937:571): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6a14db0 a2=3 a3=1 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:47:05.956481 systemd-logind[1305]: New session 23 of user core.
Oct 31 00:47:05.937000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 00:47:05.958437 kernel: audit: type=1327 audit(1761871625.937:571): proctitle=737368643A20636F7265205B707269765D
Oct 31 00:47:05.966000 audit[5275]: USER_START pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:05.967000 audit[5278]: CRED_ACQ pid=5278 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:05.975818 kernel: audit: type=1105 audit(1761871625.966:572): pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:05.975918 kernel: audit: type=1103 audit(1761871625.967:573): pid=5278 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:06.135801 sshd[5275]: pam_unix(sshd:session): session closed for user core
Oct 31 00:47:06.135000 audit[5275]: USER_END pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:06.135000 audit[5275]: CRED_DISP pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:06.142311 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:48588.service: Deactivated successfully.
Oct 31 00:47:06.143933 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 00:47:06.144356 systemd-logind[1305]: Session 23 logged out. Waiting for processes to exit.
Oct 31 00:47:06.145246 kernel: audit: type=1106 audit(1761871626.135:574): pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:06.145314 kernel: audit: type=1104 audit(1761871626.135:575): pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:06.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.54:22-10.0.0.1:48588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:47:06.145360 systemd-logind[1305]: Removed session 23.
Oct 31 00:47:09.422599 kubelet[2117]: E1031 00:47:09.422559 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:10.423552 kubelet[2117]: E1031 00:47:10.423499 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-25c9f" podUID="c0bdf479-9385-4085-afb4-2cdc588aefd9"
Oct 31 00:47:11.139203 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:36314.service.
Oct 31 00:47:11.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.54:22-10.0.0.1:36314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:47:11.143425 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 31 00:47:11.143557 kernel: audit: type=1130 audit(1761871631.137:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.54:22-10.0.0.1:36314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:47:11.185000 audit[5314]: USER_ACCT pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.186826 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 36314 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:47:11.188479 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:47:11.185000 audit[5314]: CRED_ACQ pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.193579 kernel: audit: type=1101 audit(1761871631.185:578): pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.193645 kernel: audit: type=1103 audit(1761871631.185:579): pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.196554 kernel: audit: type=1006 audit(1761871631.185:580): pid=5314 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Oct 31 00:47:11.196645 kernel: audit: type=1300 audit(1761871631.185:580): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffceb90be0 a2=3 a3=1 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:47:11.185000 audit[5314]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffceb90be0 a2=3 a3=1 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:47:11.198078 systemd[1]: Started session-24.scope.
Oct 31 00:47:11.199206 systemd-logind[1305]: New session 24 of user core.
Oct 31 00:47:11.185000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 00:47:11.200848 kernel: audit: type=1327 audit(1761871631.185:580): proctitle=737368643A20636F7265205B707269765D
Oct 31 00:47:11.202000 audit[5314]: USER_START pid=5314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.204000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.211507 kernel: audit: type=1105 audit(1761871631.202:581): pid=5314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.211602 kernel: audit: type=1103 audit(1761871631.204:582): pid=5317 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.331839 sshd[5314]: pam_unix(sshd:session): session closed for user core
Oct 31 00:47:11.331000 audit[5314]: USER_END pid=5314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.334576 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:36314.service: Deactivated successfully.
Oct 31 00:47:11.335842 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 00:47:11.336173 systemd-logind[1305]: Session 24 logged out. Waiting for processes to exit.
Oct 31 00:47:11.331000 audit[5314]: CRED_DISP pid=5314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.336847 systemd-logind[1305]: Removed session 24.
Oct 31 00:47:11.339741 kernel: audit: type=1106 audit(1761871631.331:583): pid=5314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.339827 kernel: audit: type=1104 audit(1761871631.331:584): pid=5314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 00:47:11.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.54:22-10.0.0.1:36314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:47:11.422837 kubelet[2117]: E1031 00:47:11.422640 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94987b775-fhccb" podUID="07e12617-5c5d-4e42-9bef-37ca707707aa"