Sep 13 00:22:45.718194 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:22:45.718213 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 00:22:45.718221 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:22:45.718227 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 13 00:22:45.718232 kernel: random: crng init done
Sep 13 00:22:45.718237 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:22:45.718243 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 13 00:22:45.718250 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:22:45.718255 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718260 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718266 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718271 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718277 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718282 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718290 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718296 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718302 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:22:45.718307 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 13 00:22:45.718313 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:22:45.718319 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:22:45.718324 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Sep 13 00:22:45.718330 kernel: Zone ranges:
Sep 13 00:22:45.718335 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:22:45.718342 kernel: DMA32 empty
Sep 13 00:22:45.718347 kernel: Normal empty
Sep 13 00:22:45.718353 kernel: Movable zone start for each node
Sep 13 00:22:45.718359 kernel: Early memory node ranges
Sep 13 00:22:45.718364 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 13 00:22:45.718386 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 13 00:22:45.718393 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 13 00:22:45.718399 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 13 00:22:45.718405 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 13 00:22:45.718411 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 13 00:22:45.718417 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 13 00:22:45.718426 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:22:45.718435 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 13 00:22:45.718441 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:22:45.718447 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:22:45.718453 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:22:45.718459 kernel: psci: Trusted OS migration not required
Sep 13 00:22:45.718468 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:22:45.718476 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:22:45.718484 kernel: ACPI: SRAT not present
Sep 13 00:22:45.718490 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 00:22:45.718497 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 00:22:45.718503 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 13 00:22:45.718509 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:22:45.718516 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:22:45.718522 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:22:45.718528 kernel: CPU features: detected: Spectre-v4
Sep 13 00:22:45.718534 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:22:45.718542 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:22:45.718548 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:22:45.718554 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:22:45.718561 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:22:45.718567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 13 00:22:45.718573 kernel: Policy zone: DMA
Sep 13 00:22:45.718580 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:22:45.718586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:22:45.718592 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:22:45.718598 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:22:45.718605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:22:45.718612 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Sep 13 00:22:45.718618 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:22:45.718624 kernel: trace event string verifier disabled
Sep 13 00:22:45.718630 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:22:45.718637 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:22:45.718643 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:22:45.718649 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:22:45.718655 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:22:45.718661 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:22:45.718667 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:22:45.718673 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:22:45.718680 kernel: GICv3: 256 SPIs implemented
Sep 13 00:22:45.718687 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:22:45.718693 kernel: GICv3: Distributor has no Range Selector support
Sep 13 00:22:45.718699 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:22:45.718708 kernel: GICv3: 16 PPIs implemented
Sep 13 00:22:45.718717 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:22:45.718724 kernel: ACPI: SRAT not present
Sep 13 00:22:45.718732 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:22:45.718740 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:22:45.718747 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:22:45.718754 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 13 00:22:45.718760 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 13 00:22:45.718768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:22:45.718775 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:22:45.718781 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:22:45.718788 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:22:45.718794 kernel: arm-pv: using stolen time PV
Sep 13 00:22:45.718800 kernel: Console: colour dummy device 80x25
Sep 13 00:22:45.718807 kernel: ACPI: Core revision 20210730
Sep 13 00:22:45.718813 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:22:45.718820 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:22:45.718826 kernel: LSM: Security Framework initializing
Sep 13 00:22:45.718833 kernel: SELinux: Initializing.
Sep 13 00:22:45.718840 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:22:45.718846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:22:45.718852 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:22:45.718859 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:22:45.718865 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:22:45.718871 kernel: Remapping and enabling EFI services.
Sep 13 00:22:45.718877 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:22:45.718883 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:22:45.718890 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:22:45.718897 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 13 00:22:45.718904 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:22:45.718910 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:22:45.718916 kernel: Detected PIPT I-cache on CPU2
Sep 13 00:22:45.718922 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 13 00:22:45.718929 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 13 00:22:45.718935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:22:45.718941 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 13 00:22:45.718947 kernel: Detected PIPT I-cache on CPU3
Sep 13 00:22:45.718955 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 13 00:22:45.718961 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 13 00:22:45.718967 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:22:45.718974 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 13 00:22:45.718984 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:22:45.718991 kernel: SMP: Total of 4 processors activated.
Sep 13 00:22:45.718998 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:22:45.719004 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:22:45.719011 kernel: CPU features: detected: Common not Private translations
Sep 13 00:22:45.719018 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:22:45.719024 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:22:45.719031 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:22:45.719039 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:22:45.719045 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:22:45.719052 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:22:45.719058 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:22:45.719065 kernel: alternatives: patching kernel code
Sep 13 00:22:45.719072 kernel: devtmpfs: initialized
Sep 13 00:22:45.719079 kernel: KASLR enabled
Sep 13 00:22:45.719085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:22:45.719092 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:22:45.719099 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:22:45.719105 kernel: SMBIOS 3.0.0 present.
Sep 13 00:22:45.719112 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 13 00:22:45.719118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:22:45.719125 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:22:45.719132 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:22:45.719139 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:22:45.719146 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:22:45.719153 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Sep 13 00:22:45.719159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:22:45.719166 kernel: cpuidle: using governor menu
Sep 13 00:22:45.719173 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:22:45.719179 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:22:45.719186 kernel: ACPI: bus type PCI registered
Sep 13 00:22:45.719194 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:22:45.719200 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:22:45.719207 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:22:45.719213 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:22:45.719220 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:22:45.719226 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:22:45.719233 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:22:45.719239 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:22:45.719246 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:22:45.719253 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:22:45.719260 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:22:45.719266 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:22:45.719273 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:22:45.719288 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:22:45.719295 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:22:45.719301 kernel: ACPI: Interpreter enabled
Sep 13 00:22:45.719308 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:22:45.719314 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:22:45.719322 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:22:45.719328 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:22:45.719335 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:22:45.719466 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:22:45.719534 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:22:45.719593 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:22:45.719652 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:22:45.719727 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:22:45.719737 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:22:45.719744 kernel: PCI host bridge to bus 0000:00
Sep 13 00:22:45.719810 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:22:45.719864 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:22:45.719919 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:22:45.719971 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:22:45.720046 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:22:45.720120 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:22:45.720195 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 13 00:22:45.720256 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 13 00:22:45.720315 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:22:45.720394 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:22:45.720462 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 13 00:22:45.720526 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 13 00:22:45.720581 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:22:45.720635 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:22:45.720688 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:22:45.720697 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:22:45.720704 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:22:45.720711 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:22:45.720719 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:22:45.720725 kernel: iommu: Default domain type: Translated
Sep 13 00:22:45.720732 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:22:45.720739 kernel: vgaarb: loaded
Sep 13 00:22:45.720745 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:22:45.720752 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:22:45.720758 kernel: PTP clock support registered
Sep 13 00:22:45.720765 kernel: Registered efivars operations
Sep 13 00:22:45.720771 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:22:45.720778 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:22:45.720786 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:22:45.720792 kernel: pnp: PnP ACPI init
Sep 13 00:22:45.720856 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:22:45.720866 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:22:45.720873 kernel: NET: Registered PF_INET protocol family
Sep 13 00:22:45.720879 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:22:45.720886 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:22:45.720893 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:22:45.720901 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:22:45.720907 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:22:45.720914 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:22:45.720921 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:22:45.720927 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:22:45.720934 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:22:45.720940 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:22:45.720947 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 13 00:22:45.720953 kernel: kvm [1]: HYP mode not available
Sep 13 00:22:45.720961 kernel: Initialise system trusted keyrings
Sep 13 00:22:45.720968 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:22:45.720974 kernel: Key type asymmetric registered
Sep 13 00:22:45.720981 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:22:45.720988 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:22:45.720994 kernel: io scheduler mq-deadline registered
Sep 13 00:22:45.721001 kernel: io scheduler kyber registered
Sep 13 00:22:45.721007 kernel: io scheduler bfq registered
Sep 13 00:22:45.721014 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:22:45.721022 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:22:45.721029 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:22:45.721088 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 13 00:22:45.721097 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:22:45.721103 kernel: thunder_xcv, ver 1.0
Sep 13 00:22:45.721114 kernel: thunder_bgx, ver 1.0
Sep 13 00:22:45.721121 kernel: nicpf, ver 1.0
Sep 13 00:22:45.721129 kernel: nicvf, ver 1.0
Sep 13 00:22:45.721201 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:22:45.721259 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:22:45 UTC (1757722965)
Sep 13 00:22:45.721268 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:22:45.721275 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:22:45.721281 kernel: Segment Routing with IPv6
Sep 13 00:22:45.721288 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:22:45.721294 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:22:45.721301 kernel: Key type dns_resolver registered
Sep 13 00:22:45.721307 kernel: registered taskstats version 1
Sep 13 00:22:45.721317 kernel: Loading compiled-in X.509 certificates
Sep 13 00:22:45.721324 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 00:22:45.721330 kernel: Key type .fscrypt registered
Sep 13 00:22:45.721336 kernel: Key type fscrypt-provisioning registered
Sep 13 00:22:45.721343 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:22:45.721350 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:22:45.721356 kernel: ima: No architecture policies found
Sep 13 00:22:45.721363 kernel: clk: Disabling unused clocks
Sep 13 00:22:45.721375 kernel: Freeing unused kernel memory: 36416K
Sep 13 00:22:45.721408 kernel: Run /init as init process
Sep 13 00:22:45.721415 kernel: with arguments:
Sep 13 00:22:45.721421 kernel: /init
Sep 13 00:22:45.721428 kernel: with environment:
Sep 13 00:22:45.721434 kernel: HOME=/
Sep 13 00:22:45.721440 kernel: TERM=linux
Sep 13 00:22:45.721447 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:22:45.721455 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:22:45.721466 systemd[1]: Detected virtualization kvm.
Sep 13 00:22:45.721473 systemd[1]: Detected architecture arm64.
Sep 13 00:22:45.721480 systemd[1]: Running in initrd.
Sep 13 00:22:45.721487 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:22:45.721498 systemd[1]: Hostname set to .
Sep 13 00:22:45.721505 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:22:45.721512 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:22:45.721519 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:22:45.721527 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:22:45.721535 systemd[1]: Reached target paths.target.
Sep 13 00:22:45.721541 systemd[1]: Reached target slices.target.
Sep 13 00:22:45.721548 systemd[1]: Reached target swap.target.
Sep 13 00:22:45.721555 systemd[1]: Reached target timers.target.
Sep 13 00:22:45.721562 systemd[1]: Listening on iscsid.socket.
Sep 13 00:22:45.721569 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:22:45.721577 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:22:45.721585 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:22:45.721592 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:22:45.721598 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:22:45.721606 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:22:45.721613 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:22:45.721620 systemd[1]: Reached target sockets.target.
Sep 13 00:22:45.721626 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:22:45.721633 systemd[1]: Finished network-cleanup.service.
Sep 13 00:22:45.721641 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:22:45.721648 systemd[1]: Starting systemd-journald.service...
Sep 13 00:22:45.721655 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:22:45.721662 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:22:45.721669 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:22:45.721676 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:22:45.721683 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:22:45.721690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:22:45.721700 systemd-journald[289]: Journal started
Sep 13 00:22:45.721743 systemd-journald[289]: Runtime Journal (/run/log/journal/bddb7cb664a74796b933c0ae28ca056d) is 6.0M, max 48.7M, 42.6M free.
Sep 13 00:22:45.719263 systemd-modules-load[290]: Inserted module 'overlay'
Sep 13 00:22:45.723957 systemd[1]: Started systemd-journald.service.
Sep 13 00:22:45.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.724420 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:22:45.729269 kernel: audit: type=1130 audit(1757722965.723:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.729286 kernel: audit: type=1130 audit(1757722965.726:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.727519 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:22:45.730012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:22:45.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.734407 kernel: audit: type=1130 audit(1757722965.730:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.742448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:22:45.742492 systemd-resolved[291]: Positive Trust Anchors:
Sep 13 00:22:45.742504 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:22:45.742532 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:22:45.746634 systemd-resolved[291]: Defaulting to hostname 'linux'.
Sep 13 00:22:45.752291 kernel: Bridge firewalling registered
Sep 13 00:22:45.752314 kernel: audit: type=1130 audit(1757722965.749:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.747466 systemd[1]: Started systemd-resolved.service.
Sep 13 00:22:45.749399 systemd-modules-load[290]: Inserted module 'br_netfilter'
Sep 13 00:22:45.749470 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:22:45.755469 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:22:45.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.756886 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:22:45.759652 kernel: audit: type=1130 audit(1757722965.755:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.762403 kernel: SCSI subsystem initialized
Sep 13 00:22:45.765393 dracut-cmdline[308]: dracut-dracut-053
Sep 13 00:22:45.767624 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:22:45.773307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:22:45.773335 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:22:45.773345 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:22:45.773551 systemd-modules-load[290]: Inserted module 'dm_multipath'
Sep 13 00:22:45.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.777413 kernel: audit: type=1130 audit(1757722965.774:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.774430 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:22:45.775851 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:22:45.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.784948 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:22:45.788408 kernel: audit: type=1130 audit(1757722965.785:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.826407 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:22:45.839406 kernel: iscsi: registered transport (tcp)
Sep 13 00:22:45.854413 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:22:45.854465 kernel: QLogic iSCSI HBA Driver
Sep 13 00:22:45.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.888521 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:22:45.892195 kernel: audit: type=1130 audit(1757722965.888:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:45.889955 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:22:45.934441 kernel: raid6: neonx8 gen() 13321 MB/s
Sep 13 00:22:45.950402 kernel: raid6: neonx8 xor() 10478 MB/s
Sep 13 00:22:45.967483 kernel: raid6: neonx4 gen() 13313 MB/s
Sep 13 00:22:45.984419 kernel: raid6: neonx4 xor() 10937 MB/s
Sep 13 00:22:46.001401 kernel: raid6: neonx2 gen() 12865 MB/s
Sep 13 00:22:46.018406 kernel: raid6: neonx2 xor() 10220 MB/s
Sep 13 00:22:46.035395 kernel: raid6: neonx1 gen() 10337 MB/s
Sep 13 00:22:46.052401 kernel: raid6: neonx1 xor() 8622 MB/s
Sep 13 00:22:46.069398 kernel: raid6: int64x8 gen() 6203 MB/s
Sep 13 00:22:46.086401 kernel: raid6: int64x8 xor() 3495 MB/s
Sep 13 00:22:46.103398 kernel: raid6: int64x4 gen() 7078 MB/s
Sep 13 00:22:46.120400 kernel: raid6: int64x4 xor() 3797 MB/s
Sep 13 00:22:46.137406 kernel: raid6: int64x2 gen() 6060 MB/s
Sep 13 00:22:46.154398 kernel: raid6: int64x2 xor() 3276 MB/s
Sep 13 00:22:46.171404 kernel: raid6: int64x1 gen() 4981 MB/s
Sep 13 00:22:46.188909 kernel: raid6: int64x1 xor() 2602 MB/s
Sep 13 00:22:46.188935 kernel: raid6: using algorithm neonx8 gen() 13321 MB/s
Sep 13 00:22:46.188945 kernel: raid6: .... xor() 10478 MB/s, rmw enabled
Sep 13 00:22:46.188954 kernel: raid6: using neon recovery algorithm
Sep 13 00:22:46.200493 kernel: xor: measuring software checksum speed
Sep 13 00:22:46.200532 kernel: 8regs : 17242 MB/sec
Sep 13 00:22:46.201548 kernel: 32regs : 20723 MB/sec
Sep 13 00:22:46.201580 kernel: arm64_neon : 27813 MB/sec
Sep 13 00:22:46.201590 kernel: xor: using function: arm64_neon (27813 MB/sec)
Sep 13 00:22:46.256431 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 00:22:46.268303 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:22:46.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:46.271000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:22:46.271000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:22:46.273140 kernel: audit: type=1130 audit(1757722966.268:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:46.272203 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:22:46.285218 systemd-udevd[494]: Using default interface naming scheme 'v252'.
Sep 13 00:22:46.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:46.289709 systemd[1]: Started systemd-udevd.service.
Sep 13 00:22:46.291513 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:22:46.307385 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation
Sep 13 00:22:46.335323 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:22:46.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:46.336807 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:22:46.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:46.375690 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:22:46.413454 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:22:46.417599 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:22:46.417618 kernel: GPT:9289727 != 19775487
Sep 13 00:22:46.417627 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:22:46.417641 kernel: GPT:9289727 != 19775487 Sep 13 00:22:46.417650 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:22:46.417659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:22:46.441400 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (547) Sep 13 00:22:46.442667 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:22:46.443630 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:22:46.448608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:22:46.451803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:22:46.455596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:22:46.460752 systemd[1]: Starting disk-uuid.service... Sep 13 00:22:46.467202 disk-uuid[565]: Primary Header is updated. Sep 13 00:22:46.467202 disk-uuid[565]: Secondary Entries is updated. Sep 13 00:22:46.467202 disk-uuid[565]: Secondary Header is updated. Sep 13 00:22:46.473531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:22:46.473556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:22:46.476411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:22:47.479488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:22:47.480177 disk-uuid[566]: The operation has completed successfully. Sep 13 00:22:47.508177 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:22:47.509205 systemd[1]: Finished disk-uuid.service. Sep 13 00:22:47.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:47.511197 systemd[1]: Starting verity-setup.service... Sep 13 00:22:47.528410 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:22:47.563059 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:22:47.565810 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:22:47.568206 systemd[1]: Finished verity-setup.service. Sep 13 00:22:47.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.618084 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:22:47.618723 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:22:47.619425 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:22:47.620128 systemd[1]: Starting ignition-setup.service... Sep 13 00:22:47.621503 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:22:47.630776 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:22:47.630813 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:22:47.630823 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:22:47.641509 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:22:47.648792 systemd[1]: Finished ignition-setup.service. Sep 13 00:22:47.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.650770 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 13 00:22:47.708015 ignition[649]: Ignition 2.14.0 Sep 13 00:22:47.708024 ignition[649]: Stage: fetch-offline Sep 13 00:22:47.708064 ignition[649]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:22:47.708073 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:22:47.708192 ignition[649]: parsed url from cmdline: "" Sep 13 00:22:47.708195 ignition[649]: no config URL provided Sep 13 00:22:47.708200 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:22:47.708207 ignition[649]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:22:47.708225 ignition[649]: op(1): [started] loading QEMU firmware config module Sep 13 00:22:47.708230 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:22:47.715453 ignition[649]: op(1): [finished] loading QEMU firmware config module Sep 13 00:22:47.721717 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:22:47.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.723000 audit: BPF prog-id=9 op=LOAD Sep 13 00:22:47.724088 systemd[1]: Starting systemd-networkd.service... Sep 13 00:22:47.745347 systemd-networkd[742]: lo: Link UP Sep 13 00:22:47.745366 systemd-networkd[742]: lo: Gained carrier Sep 13 00:22:47.746011 systemd-networkd[742]: Enumeration completed Sep 13 00:22:47.746411 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:22:47.747834 systemd-networkd[742]: eth0: Link UP Sep 13 00:22:47.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.747838 systemd-networkd[742]: eth0: Gained carrier Sep 13 00:22:47.749945 systemd[1]: Started systemd-networkd.service. 
Sep 13 00:22:47.750916 systemd[1]: Reached target network.target. Sep 13 00:22:47.753134 systemd[1]: Starting iscsiuio.service... Sep 13 00:22:47.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.760514 systemd[1]: Started iscsiuio.service. Sep 13 00:22:47.762986 systemd[1]: Starting iscsid.service... Sep 13 00:22:47.766891 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:22:47.766891 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:22:47.766891 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:22:47.766891 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:22:47.766891 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:22:47.766891 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:22:47.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.778082 ignition[649]: parsing config with SHA512: 5059639f0b8461650431714eb448ff59dbea946723cf6f14f36fbd123895255d1f95d66353e5d300056668a60b2620c0463fff4efd7061b2ef1e37c38a27e601 Sep 13 00:22:47.774136 systemd[1]: Started iscsid.service. 
Sep 13 00:22:47.774490 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:22:47.777251 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:22:47.789192 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:22:47.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.790011 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:22:47.791064 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:22:47.792163 systemd[1]: Reached target remote-fs.target. Sep 13 00:22:47.792281 unknown[649]: fetched base config from "system" Sep 13 00:22:47.793780 ignition[649]: fetch-offline: fetch-offline passed Sep 13 00:22:47.792294 unknown[649]: fetched user config from "qemu" Sep 13 00:22:47.793844 ignition[649]: Ignition finished successfully Sep 13 00:22:47.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.794028 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:22:47.795744 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:22:47.796946 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:22:47.797624 systemd[1]: Starting ignition-kargs.service... Sep 13 00:22:47.803134 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:22:47.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:47.806616 ignition[758]: Ignition 2.14.0 Sep 13 00:22:47.806625 ignition[758]: Stage: kargs Sep 13 00:22:47.806714 ignition[758]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:22:47.806724 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:22:47.807647 ignition[758]: kargs: kargs passed Sep 13 00:22:47.809305 systemd[1]: Finished ignition-kargs.service. Sep 13 00:22:47.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.807691 ignition[758]: Ignition finished successfully Sep 13 00:22:47.810948 systemd[1]: Starting ignition-disks.service... Sep 13 00:22:47.817429 ignition[768]: Ignition 2.14.0 Sep 13 00:22:47.817440 ignition[768]: Stage: disks Sep 13 00:22:47.817530 ignition[768]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:22:47.817540 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:22:47.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.819185 systemd[1]: Finished ignition-disks.service. Sep 13 00:22:47.818366 ignition[768]: disks: disks passed Sep 13 00:22:47.820326 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:22:47.818427 ignition[768]: Ignition finished successfully Sep 13 00:22:47.821620 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:22:47.822658 systemd[1]: Reached target local-fs.target. Sep 13 00:22:47.823609 systemd[1]: Reached target sysinit.target. Sep 13 00:22:47.824679 systemd[1]: Reached target basic.target. Sep 13 00:22:47.826500 systemd[1]: Starting systemd-fsck-root.service... 
Sep 13 00:22:47.838615 systemd-fsck[775]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 13 00:22:47.844552 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:22:47.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.846948 systemd[1]: Mounting sysroot.mount... Sep 13 00:22:47.855405 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:22:47.855419 systemd[1]: Mounted sysroot.mount. Sep 13 00:22:47.856003 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:22:47.858309 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:22:47.859101 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:22:47.859139 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:22:47.859163 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:22:47.861152 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:22:47.863792 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:22:47.868098 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:22:47.873694 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:22:47.877734 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:22:47.883883 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:22:47.925989 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:22:47.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:47.927477 systemd[1]: Starting ignition-mount.service... Sep 13 00:22:47.928636 systemd[1]: Starting sysroot-boot.service... Sep 13 00:22:47.933409 bash[826]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:22:47.943018 ignition[827]: INFO : Ignition 2.14.0 Sep 13 00:22:47.943919 ignition[827]: INFO : Stage: mount Sep 13 00:22:47.944824 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:22:47.945609 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:22:47.947447 ignition[827]: INFO : mount: mount passed Sep 13 00:22:47.948050 ignition[827]: INFO : Ignition finished successfully Sep 13 00:22:47.948194 systemd[1]: Finished ignition-mount.service. Sep 13 00:22:47.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:47.951148 systemd[1]: Finished sysroot-boot.service. Sep 13 00:22:47.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:48.578232 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:22:48.585399 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (836) Sep 13 00:22:48.586789 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:22:48.586810 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:22:48.586820 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:22:48.632065 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:22:48.633637 systemd[1]: Starting ignition-files.service... 
Sep 13 00:22:48.647120 ignition[856]: INFO : Ignition 2.14.0 Sep 13 00:22:48.647120 ignition[856]: INFO : Stage: files Sep 13 00:22:48.648626 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:22:48.648626 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:22:48.648626 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:22:48.651336 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:22:48.651336 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:22:48.653906 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:22:48.654981 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:22:48.656217 unknown[856]: wrote ssh authorized keys file for user: core Sep 13 00:22:48.657177 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:22:48.657177 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:22:48.657177 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:22:48.657177 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:22:48.657177 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 00:22:48.811589 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:22:49.128583 systemd-networkd[742]: eth0: Gained IPv6LL Sep 13 00:22:50.481265 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:22:50.481265 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:22:50.489427 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 00:22:50.768256 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:22:51.205030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:22:51.205030 ignition[856]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 
13 00:22:51.208046 ignition[856]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:22:51.208046 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:22:51.268737 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:22:51.270550 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:22:51.270550 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:22:51.270550 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:22:51.270550 ignition[856]: INFO : files: files passed Sep 13 00:22:51.270550 ignition[856]: INFO : Ignition finished successfully Sep 13 00:22:51.280481 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:22:51.280503 kernel: audit: type=1130 audit(1757722971.274:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:51.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.273992 systemd[1]: Finished ignition-files.service. Sep 13 00:22:51.276164 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:22:51.279924 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:22:51.280771 systemd[1]: Starting ignition-quench.service... Sep 13 00:22:51.285160 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:22:51.283570 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:22:51.287051 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:22:51.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.283654 systemd[1]: Finished ignition-quench.service. Sep 13 00:22:51.295818 kernel: audit: type=1130 audit(1757722971.284:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.295846 kernel: audit: type=1131 audit(1757722971.284:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.295856 kernel: audit: type=1130 audit(1757722971.292:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:51.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.288088 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:22:51.293210 systemd[1]: Reached target ignition-complete.target. Sep 13 00:22:51.297149 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:22:51.311127 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:22:51.317184 kernel: audit: type=1130 audit(1757722971.311:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.317207 kernel: audit: type=1131 audit(1757722971.311:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.311233 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:22:51.312057 systemd[1]: Reached target initrd-fs.target. Sep 13 00:22:51.317740 systemd[1]: Reached target initrd.target. 
Sep 13 00:22:51.318845 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:22:51.319581 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:22:51.331687 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:22:51.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.333045 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:22:51.335822 kernel: audit: type=1130 audit(1757722971.331:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.341340 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:22:51.342064 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:22:51.343117 systemd[1]: Stopped target timers.target. Sep 13 00:22:51.344323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:22:51.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.344460 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:22:51.348992 kernel: audit: type=1131 audit(1757722971.344:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.345437 systemd[1]: Stopped target initrd.target. Sep 13 00:22:51.348692 systemd[1]: Stopped target basic.target. Sep 13 00:22:51.349559 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:22:51.350581 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:22:51.351852 systemd[1]: Stopped target initrd-root-device.target. 
Sep 13 00:22:51.353080 systemd[1]: Stopped target remote-fs.target. Sep 13 00:22:51.354113 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:22:51.355191 systemd[1]: Stopped target sysinit.target. Sep 13 00:22:51.356265 systemd[1]: Stopped target local-fs.target. Sep 13 00:22:51.357394 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:22:51.358396 systemd[1]: Stopped target swap.target. Sep 13 00:22:51.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.359339 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:22:51.363872 kernel: audit: type=1131 audit(1757722971.360:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.359468 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:22:51.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.360522 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:22:51.367923 kernel: audit: type=1131 audit(1757722971.364:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.363275 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:22:51.363404 systemd[1]: Stopped dracut-initqueue.service. 
Sep 13 00:22:51.364520 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:22:51.364613 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:22:51.367572 systemd[1]: Stopped target paths.target. Sep 13 00:22:51.368448 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:22:51.372429 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:22:51.373173 systemd[1]: Stopped target slices.target. Sep 13 00:22:51.374209 systemd[1]: Stopped target sockets.target. Sep 13 00:22:51.375176 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:22:51.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.375282 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:22:51.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.376302 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:22:51.376412 systemd[1]: Stopped ignition-files.service. Sep 13 00:22:51.379932 iscsid[748]: iscsid shutting down. Sep 13 00:22:51.378498 systemd[1]: Stopping ignition-mount.service... Sep 13 00:22:51.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.379567 systemd[1]: Stopping iscsid.service... Sep 13 00:22:51.380293 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:22:51.380521 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:22:51.382292 systemd[1]: Stopping sysroot-boot.service... 
Sep 13 00:22:51.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.383071 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:22:51.383192 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:22:51.389304 ignition[896]: INFO : Ignition 2.14.0 Sep 13 00:22:51.389304 ignition[896]: INFO : Stage: umount Sep 13 00:22:51.389304 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:22:51.389304 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:22:51.389304 ignition[896]: INFO : umount: umount passed Sep 13 00:22:51.389304 ignition[896]: INFO : Ignition finished successfully Sep 13 00:22:51.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:51.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.384893 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:22:51.384981 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:22:51.387396 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:22:51.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.387513 systemd[1]: Stopped iscsid.service. Sep 13 00:22:51.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.390536 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:22:51.390604 systemd[1]: Closed iscsid.socket. Sep 13 00:22:51.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.391410 systemd[1]: Stopping iscsiuio.service... Sep 13 00:22:51.396839 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:22:51.397278 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:22:51.397398 systemd[1]: Stopped iscsiuio.service. Sep 13 00:22:51.399153 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:22:51.399241 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:22:51.401983 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:22:51.402066 systemd[1]: Stopped ignition-mount.service. Sep 13 00:22:51.403601 systemd[1]: Stopped target network.target. 
Sep 13 00:22:51.404283 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:22:51.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.404315 systemd[1]: Closed iscsiuio.socket. Sep 13 00:22:51.407056 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:22:51.407124 systemd[1]: Stopped ignition-disks.service. Sep 13 00:22:51.408868 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:22:51.408907 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:22:51.411565 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:22:51.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.431000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:22:51.411607 systemd[1]: Stopped ignition-setup.service. Sep 13 00:22:51.413302 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:22:51.416159 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:22:51.425155 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:22:51.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.425260 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:22:51.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:51.428513 systemd-networkd[742]: eth0: DHCPv6 lease lost Sep 13 00:22:51.437000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:22:51.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.430005 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:22:51.430101 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:22:51.431253 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:22:51.431285 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:22:51.433126 systemd[1]: Stopping network-cleanup.service... Sep 13 00:22:51.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.434431 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:22:51.434498 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:22:51.436060 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:22:51.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.436098 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:22:51.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.437729 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:22:51.437772 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:22:51.438813 systemd[1]: Stopping systemd-udevd.service... 
Sep 13 00:22:51.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.443163 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:22:51.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.446251 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:22:51.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.446361 systemd[1]: Stopped network-cleanup.service. Sep 13 00:22:51.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.449077 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:22:51.449161 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:22:51.451195 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:22:51.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.451311 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:22:51.452622 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:22:51.452654 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:22:51.453584 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:22:51.453617 systemd[1]: Closed systemd-udevd-kernel.socket. 
Sep 13 00:22:51.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:51.455329 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:22:51.455548 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:22:51.457067 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:22:51.457107 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:22:51.458209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:22:51.458246 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:22:51.459947 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:22:51.459986 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:22:51.461926 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:22:51.462981 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:22:51.463034 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:22:51.467067 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:22:51.467152 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:22:51.468225 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:22:51.469947 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:22:51.477673 systemd[1]: Switching root. 
Sep 13 00:22:51.481000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:22:51.481000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:22:51.481000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:22:51.481000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:22:51.481000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:22:51.498707 systemd-journald[289]: Journal stopped Sep 13 00:22:53.649854 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Sep 13 00:22:53.649912 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:22:53.649925 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:22:53.649936 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:22:53.649946 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:22:53.649957 kernel: SELinux: policy capability open_perms=1 Sep 13 00:22:53.649968 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:22:53.649979 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:22:53.649993 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:22:53.650002 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:22:53.650018 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:22:53.650029 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:22:53.650041 systemd[1]: Successfully loaded SELinux policy in 33.686ms. Sep 13 00:22:53.650059 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.575ms. Sep 13 00:22:53.650072 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:22:53.650084 systemd[1]: Detected virtualization kvm. Sep 13 00:22:53.650095 systemd[1]: Detected architecture arm64. 
Sep 13 00:22:53.650108 systemd[1]: Detected first boot. Sep 13 00:22:53.650120 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:22:53.650131 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:22:53.650142 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:22:53.650158 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:22:53.650171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:22:53.650184 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:22:53.650196 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:22:53.650208 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:22:53.650219 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:22:53.650230 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:22:53.650244 systemd[1]: Created slice system-getty.slice. Sep 13 00:22:53.650255 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:22:53.650267 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:22:53.650279 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:22:53.650291 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:22:53.650302 systemd[1]: Created slice user.slice. Sep 13 00:22:53.650313 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:22:53.650335 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:22:53.650349 systemd[1]: Set up automount boot.automount. 
Sep 13 00:22:53.650360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:22:53.650373 systemd[1]: Reached target integritysetup.target. Sep 13 00:22:53.650401 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:22:53.650414 systemd[1]: Reached target remote-fs.target. Sep 13 00:22:53.650425 systemd[1]: Reached target slices.target. Sep 13 00:22:53.650437 systemd[1]: Reached target swap.target. Sep 13 00:22:53.650448 systemd[1]: Reached target torcx.target. Sep 13 00:22:53.650460 systemd[1]: Reached target veritysetup.target. Sep 13 00:22:53.650471 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:22:53.650485 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:22:53.650497 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:22:53.650508 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:22:53.650519 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:22:53.650530 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:22:53.650542 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:22:53.650554 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:22:53.650566 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:22:53.650577 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:22:53.650589 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:22:53.650600 systemd[1]: Mounting media.mount... Sep 13 00:22:53.650611 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:22:53.650622 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:22:53.650633 systemd[1]: Mounting tmp.mount... Sep 13 00:22:53.650644 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:22:53.650655 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:22:53.650666 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:22:53.650677 systemd[1]: Starting modprobe@configfs.service... 
Sep 13 00:22:53.650690 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:22:53.650701 systemd[1]: Starting modprobe@drm.service... Sep 13 00:22:53.650713 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:22:53.650724 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:22:53.650735 systemd[1]: Starting modprobe@loop.service... Sep 13 00:22:53.650746 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:22:53.650758 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:22:53.650769 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:22:53.650780 systemd[1]: Starting systemd-journald.service... Sep 13 00:22:53.650793 kernel: loop: module loaded Sep 13 00:22:53.650804 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:22:53.650816 kernel: fuse: init (API version 7.34) Sep 13 00:22:53.650826 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:22:53.650837 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:22:53.650848 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:22:53.650859 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:22:53.650870 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:22:53.650881 systemd[1]: Mounted media.mount. Sep 13 00:22:53.650892 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:22:53.650905 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:22:53.650915 systemd[1]: Mounted tmp.mount. Sep 13 00:22:53.650927 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:22:53.650938 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:22:53.650949 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:22:53.650960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:22:53.650972 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:22:53.650984 systemd-journald[1025]: Journal started Sep 13 00:22:53.651028 systemd-journald[1025]: Runtime Journal (/run/log/journal/bddb7cb664a74796b933c0ae28ca056d) is 6.0M, max 48.7M, 42.6M free. Sep 13 00:22:53.573000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:22:53.573000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:22:53.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.648000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:22:53.648000 audit[1025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd5a52a00 a2=4000 a3=1 items=0 ppid=1 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:22:53.648000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:22:53.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:53.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.652410 systemd[1]: Started systemd-journald.service. Sep 13 00:22:53.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.653765 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:22:53.653968 systemd[1]: Finished modprobe@drm.service. Sep 13 00:22:53.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.654901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:22:53.655079 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:22:53.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:53.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.656056 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:22:53.656235 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:22:53.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.657133 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:22:53.659520 systemd[1]: Finished modprobe@loop.service. Sep 13 00:22:53.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.660633 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:22:53.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.661890 systemd[1]: Finished systemd-network-generator.service. 
Sep 13 00:22:53.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.663298 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:22:53.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.664287 systemd[1]: Reached target network-pre.target. Sep 13 00:22:53.665976 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:22:53.668173 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:22:53.669015 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:22:53.671014 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:22:53.672800 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:22:53.673790 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:22:53.674786 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:22:53.675560 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:22:53.676641 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:22:53.677967 systemd-journald[1025]: Time spent on flushing to /var/log/journal/bddb7cb664a74796b933c0ae28ca056d is 13.144ms for 931 entries. Sep 13 00:22:53.677967 systemd-journald[1025]: System Journal (/var/log/journal/bddb7cb664a74796b933c0ae28ca056d) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:22:53.700577 systemd-journald[1025]: Received client request to flush runtime journal. 
Sep 13 00:22:53.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.681177 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:22:53.682374 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:22:53.684028 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:22:53.686890 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:22:53.690037 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:22:53.691003 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:22:53.700293 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:22:53.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.702307 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:22:53.703686 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:22:53.705803 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:22:53.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:53.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:53.706996 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:22:53.708823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:22:53.715046 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:22:53.723912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:22:53.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:54.048538 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:22:54.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:54.050641 systemd[1]: Starting systemd-udevd.service... Sep 13 00:22:54.067956 systemd-udevd[1088]: Using default interface naming scheme 'v252'. Sep 13 00:22:54.083887 systemd[1]: Started systemd-udevd.service. Sep 13 00:22:54.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:54.086412 systemd[1]: Starting systemd-networkd.service... Sep 13 00:22:54.092831 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:22:54.104710 systemd[1]: Found device dev-ttyAMA0.device. Sep 13 00:22:54.142190 systemd[1]: Started systemd-userdbd.service. 
Sep 13 00:22:54.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.151607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:22:54.195825 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 00:22:54.197971 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 00:22:54.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.200838 systemd-networkd[1096]: lo: Link UP
Sep 13 00:22:54.200850 systemd-networkd[1096]: lo: Gained carrier
Sep 13 00:22:54.201454 systemd-networkd[1096]: Enumeration completed
Sep 13 00:22:54.201574 systemd-networkd[1096]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:22:54.201594 systemd[1]: Started systemd-networkd.service.
Sep 13 00:22:54.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.203117 systemd-networkd[1096]: eth0: Link UP
Sep 13 00:22:54.203120 systemd-networkd[1096]: eth0: Gained carrier
Sep 13 00:22:54.206930 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:22:54.229535 systemd-networkd[1096]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:22:54.230224 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 00:22:54.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.231534 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:22:54.233545 systemd[1]: Starting lvm2-activation.service...
Sep 13 00:22:54.237099 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:22:54.271251 systemd[1]: Finished lvm2-activation.service.
Sep 13 00:22:54.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.272066 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:22:54.272779 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:22:54.272805 systemd[1]: Reached target local-fs.target.
Sep 13 00:22:54.273373 systemd[1]: Reached target machines.target.
Sep 13 00:22:54.275119 systemd[1]: Starting ldconfig.service...
Sep 13 00:22:54.277911 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.277963 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:22:54.278915 systemd[1]: Starting systemd-boot-update.service...
Sep 13 00:22:54.282893 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 00:22:54.286447 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 00:22:54.289191 systemd[1]: Starting systemd-sysext.service...
Sep 13 00:22:54.293617 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1127 (bootctl)
Sep 13 00:22:54.294620 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 00:22:54.298579 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 00:22:54.306332 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 00:22:54.306611 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 00:22:54.321394 kernel: loop0: detected capacity change from 0 to 203944
Sep 13 00:22:54.321744 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 00:22:54.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.380899 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 00:22:54.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.388573 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:22:54.390044 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31)
Sep 13 00:22:54.390044 systemd-fsck[1139]: /dev/vda1: 236 files, 117310/258078 clusters
Sep 13 00:22:54.391548 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 00:22:54.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.409422 kernel: loop1: detected capacity change from 0 to 203944
Sep 13 00:22:54.415671 (sd-sysext)[1147]: Using extensions 'kubernetes'.
Sep 13 00:22:54.416069 (sd-sysext)[1147]: Merged extensions into '/usr'.
Sep 13 00:22:54.431904 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.433090 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:22:54.434746 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:22:54.436600 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:22:54.437766 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.437890 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:22:54.438615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:22:54.438756 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:22:54.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.440081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:22:54.440210 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:22:54.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.441556 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:22:54.441702 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:22:54.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.442783 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:22:54.442871 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.511287 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:22:54.516744 systemd[1]: Finished ldconfig.service.
Sep 13 00:22:54.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.637950 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:22:54.639720 systemd[1]: Mounting boot.mount...
Sep 13 00:22:54.643622 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 00:22:54.651993 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:22:54.658238 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:22:54.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.659176 systemd[1]: Mounted boot.mount.
Sep 13 00:22:54.665286 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:22:54.667089 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:22:54.671708 systemd[1]: Finished systemd-boot-update.service.
Sep 13 00:22:54.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.673889 systemd[1]: Reloading.
Sep 13 00:22:54.676798 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:22:54.678552 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:22:54.680089 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:22:54.709162 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-09-13T00:22:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:22:54.709193 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-09-13T00:22:54Z" level=info msg="torcx already run"
Sep 13 00:22:54.785936 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:22:54.785958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:22:54.801189 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:22:54.846786 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:22:54.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.850413 systemd[1]: Starting audit-rules.service...
Sep 13 00:22:54.852104 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:22:54.854035 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:22:54.856142 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:22:54.858399 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:22:54.860063 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:22:54.861669 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:22:54.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.864919 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:22:54.873000 audit[1237]: SYSTEM_BOOT pid=1237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.877317 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.878743 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:22:54.880497 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:22:54.882167 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:22:54.882891 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.883025 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:22:54.883134 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:22:54.884042 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:22:54.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.885142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:22:54.885291 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:22:54.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.887765 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:22:54.887918 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:22:54.889623 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:22:54.890928 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.892022 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:22:54.893981 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:22:54.894660 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.894775 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:22:54.894866 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:22:54.895622 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:22:54.896969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:22:54.897122 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:22:54.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.898655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:22:54.898793 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:22:54.900794 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:22:54.902663 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:22:54.903755 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:22:54.903856 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.905182 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:22:54.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.909352 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.910677 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:22:54.912565 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:22:54.914663 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:22:54.916617 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:22:54.917448 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.917585 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:22:54.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:54.928000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:22:54.928000 audit[1276]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff5c19ed0 a2=420 a3=0 items=0 ppid=1230 pid=1276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:54.928000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:22:54.929929 augenrules[1276]: No rules
Sep 13 00:22:54.925490 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:22:54.926308 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:22:54.927518 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:22:54.928776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:22:54.928913 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:22:54.930059 systemd-resolved[1235]: Positive Trust Anchors:
Sep 13 00:22:54.930077 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:22:54.930105 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:22:54.933319 systemd[1]: Finished audit-rules.service.
Sep 13 00:22:54.934550 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:22:54.934693 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:22:54.935875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:22:54.936017 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:22:54.937393 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:22:54.937554 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:22:54.938955 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:22:54.939025 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.940199 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:22:54.941339 systemd-resolved[1235]: Defaulting to hostname 'linux'.
Sep 13 00:22:54.942847 systemd[1]: Started systemd-resolved.service.
Sep 13 00:22:54.943815 systemd[1]: Reached target network.target.
Sep 13 00:22:54.944403 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:22:54.953629 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:22:54.954301 systemd-timesyncd[1236]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:22:54.954621 systemd-timesyncd[1236]: Initial clock synchronization to Sat 2025-09-13 00:22:54.884290 UTC.
Sep 13 00:22:54.954716 systemd[1]: Reached target sysinit.target.
Sep 13 00:22:54.955389 systemd[1]: Started motdgen.path.
Sep 13 00:22:54.955929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:22:54.956763 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:22:54.957374 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:22:54.957408 systemd[1]: Reached target paths.target.
Sep 13 00:22:54.957949 systemd[1]: Reached target time-set.target.
Sep 13 00:22:54.958788 systemd[1]: Started logrotate.timer.
Sep 13 00:22:54.959426 systemd[1]: Started mdadm.timer.
Sep 13 00:22:54.959924 systemd[1]: Reached target timers.target.
Sep 13 00:22:54.960790 systemd[1]: Listening on dbus.socket.
Sep 13 00:22:54.962517 systemd[1]: Starting docker.socket...
Sep 13 00:22:54.964170 systemd[1]: Listening on sshd.socket.
Sep 13 00:22:54.964904 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:22:54.965187 systemd[1]: Listening on docker.socket.
Sep 13 00:22:54.965859 systemd[1]: Reached target sockets.target.
Sep 13 00:22:54.966451 systemd[1]: Reached target basic.target.
Sep 13 00:22:54.967126 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:22:54.967172 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.967195 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:22:54.968209 systemd[1]: Starting containerd.service...
Sep 13 00:22:54.970132 systemd[1]: Starting dbus.service...
Sep 13 00:22:54.971818 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:22:54.973814 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:22:54.974724 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:22:54.975990 systemd[1]: Starting motdgen.service...
Sep 13 00:22:54.976474 jq[1290]: false
Sep 13 00:22:54.978405 systemd[1]: Starting prepare-helm.service...
Sep 13 00:22:54.980005 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:22:54.981923 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:22:54.984192 systemd[1]: Starting systemd-logind.service...
Sep 13 00:22:54.984871 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:22:54.984939 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:22:54.986060 systemd[1]: Starting update-engine.service...
Sep 13 00:22:54.987782 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:22:54.990237 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:22:54.991037 jq[1306]: true
Sep 13 00:22:54.990497 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:22:54.991615 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:22:54.994992 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:22:55.005346 jq[1314]: true
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found loop1
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda1
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda2
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda3
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found usr
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda4
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda6
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda7
Sep 13 00:22:55.005539 extend-filesystems[1291]: Found vda9
Sep 13 00:22:55.005539 extend-filesystems[1291]: Checking size of /dev/vda9
Sep 13 00:22:55.021530 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:22:55.029007 tar[1311]: linux-arm64/helm
Sep 13 00:22:55.022564 dbus-daemon[1289]: [system] SELinux support is enabled
Sep 13 00:22:55.021760 systemd[1]: Finished motdgen.service.
Sep 13 00:22:55.023399 systemd[1]: Started dbus.service.
Sep 13 00:22:55.029842 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:22:55.029874 systemd[1]: Reached target system-config.target.
Sep 13 00:22:55.030697 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:22:55.030713 systemd[1]: Reached target user-config.target.
Sep 13 00:22:55.043717 extend-filesystems[1291]: Resized partition /dev/vda9
Sep 13 00:22:55.045189 extend-filesystems[1346]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:22:55.054780 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:22:55.054807 update_engine[1304]: I0913 00:22:55.052310 1304 main.cc:92] Flatcar Update Engine starting
Sep 13 00:22:55.052634 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:22:55.055075 bash[1344]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:22:55.056872 systemd[1]: Started update-engine.service.
Sep 13 00:22:55.056969 update_engine[1304]: I0913 00:22:55.056927 1304 update_check_scheduler.cc:74] Next update check in 8m51s
Sep 13 00:22:55.061361 systemd[1]: Started locksmithd.service.
Sep 13 00:22:55.069442 systemd-logind[1302]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 13 00:22:55.069891 systemd-logind[1302]: New seat seat0.
Sep 13 00:22:55.072603 systemd[1]: Started systemd-logind.service.
Sep 13 00:22:55.086401 env[1315]: time="2025-09-13T00:22:55.085445877Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:22:55.090394 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:22:55.101650 extend-filesystems[1346]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:22:55.101650 extend-filesystems[1346]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:22:55.101650 extend-filesystems[1346]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:22:55.114926 extend-filesystems[1291]: Resized filesystem in /dev/vda9
Sep 13 00:22:55.102545 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:22:55.102776 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:22:55.117861 env[1315]: time="2025-09-13T00:22:55.117814437Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:22:55.117988 env[1315]: time="2025-09-13T00:22:55.117970580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:22:55.125452 env[1315]: time="2025-09-13T00:22:55.125404544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:22:55.125452 env[1315]: time="2025-09-13T00:22:55.125444670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:22:55.125727 env[1315]: time="2025-09-13T00:22:55.125702160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:22:55.125727 env[1315]: time="2025-09-13T00:22:55.125724443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:22:55.125791 env[1315]: time="2025-09-13T00:22:55.125739312Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:22:55.125791 env[1315]: time="2025-09-13T00:22:55.125749066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:22:55.125836 env[1315]: time="2025-09-13T00:22:55.125821547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:22:55.126116 env[1315]: time="2025-09-13T00:22:55.126092835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:22:55.126284 env[1315]: time="2025-09-13T00:22:55.126261190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:22:55.126284 env[1315]: time="2025-09-13T00:22:55.126283117Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:22:55.126365 env[1315]: time="2025-09-13T00:22:55.126338548Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:22:55.126417 env[1315]: time="2025-09-13T00:22:55.126355241Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:22:55.131605 env[1315]: time="2025-09-13T00:22:55.131570966Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:22:55.131605 env[1315]: time="2025-09-13T00:22:55.131607841Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:22:55.131693 env[1315]: time="2025-09-13T00:22:55.131622591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:22:55.131693 env[1315]: time="2025-09-13T00:22:55.131667594Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.131693 env[1315]: time="2025-09-13T00:22:55.131682066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.131764 env[1315]: time="2025-09-13T00:22:55.131695389Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.131764 env[1315]: time="2025-09-13T00:22:55.131708275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.132234 env[1315]: time="2025-09-13T00:22:55.132209138Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.132281 env[1315]: time="2025-09-13T00:22:55.132235625Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.132281 env[1315]: time="2025-09-13T00:22:55.132250375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.132281 env[1315]: time="2025-09-13T00:22:55.132263499Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.132281 env[1315]: time="2025-09-13T00:22:55.132276821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:22:55.132434 env[1315]: time="2025-09-13T00:22:55.132412584Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:22:55.132560 env[1315]: time="2025-09-13T00:22:55.132538197Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:22:55.132828 env[1315]: time="2025-09-13T00:22:55.132807541Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:22:55.132867 env[1315]: time="2025-09-13T00:22:55.132836446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:22:55.132867 env[1315]: time="2025-09-13T00:22:55.132851355Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:22:55.132970 env[1315]: time="2025-09-13T00:22:55.132955001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..."
type=io.containerd.grpc.v1 Sep 13 00:22:55.133002 env[1315]: time="2025-09-13T00:22:55.132972130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133002 env[1315]: time="2025-09-13T00:22:55.132985294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133002 env[1315]: time="2025-09-13T00:22:55.132997149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133064 env[1315]: time="2025-09-13T00:22:55.133009243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133064 env[1315]: time="2025-09-13T00:22:55.133020979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133064 env[1315]: time="2025-09-13T00:22:55.133033271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133064 env[1315]: time="2025-09-13T00:22:55.133044294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133064 env[1315]: time="2025-09-13T00:22:55.133056783Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:22:55.133214 env[1315]: time="2025-09-13T00:22:55.133194687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133247 env[1315]: time="2025-09-13T00:22:55.133223117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133247 env[1315]: time="2025-09-13T00:22:55.133236994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 13 00:22:55.133292 env[1315]: time="2025-09-13T00:22:55.133249643Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:22:55.133292 env[1315]: time="2025-09-13T00:22:55.133263401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:22:55.133292 env[1315]: time="2025-09-13T00:22:55.133273552Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:22:55.133292 env[1315]: time="2025-09-13T00:22:55.133289372Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:22:55.133376 env[1315]: time="2025-09-13T00:22:55.133321925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:22:55.133581 env[1315]: time="2025-09-13T00:22:55.133527909Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:22:55.134476 env[1315]: time="2025-09-13T00:22:55.133586710Z" level=info msg="Connect containerd service" Sep 13 00:22:55.134476 env[1315]: time="2025-09-13T00:22:55.133618986Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:22:55.134203 locksmithd[1348]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:22:55.134747 env[1315]: time="2025-09-13T00:22:55.134482848Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:22:55.134747 env[1315]: time="2025-09-13T00:22:55.134719997Z" level=info msg="Start subscribing containerd event" Sep 13 00:22:55.134833 env[1315]: 
time="2025-09-13T00:22:55.134784468Z" level=info msg="Start recovering state" Sep 13 00:22:55.134833 env[1315]: time="2025-09-13T00:22:55.134817021Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:22:55.134876 env[1315]: time="2025-09-13T00:22:55.134844062Z" level=info msg="Start event monitor" Sep 13 00:22:55.134876 env[1315]: time="2025-09-13T00:22:55.134854253Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:22:55.134876 env[1315]: time="2025-09-13T00:22:55.134862698Z" level=info msg="Start snapshots syncer" Sep 13 00:22:55.134876 env[1315]: time="2025-09-13T00:22:55.134874197Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:22:55.134950 env[1315]: time="2025-09-13T00:22:55.134897194Z" level=info msg="containerd successfully booted in 0.050314s" Sep 13 00:22:55.134980 systemd[1]: Started containerd.service. Sep 13 00:22:55.135800 env[1315]: time="2025-09-13T00:22:55.134882682Z" level=info msg="Start streaming server" Sep 13 00:22:55.435451 tar[1311]: linux-arm64/LICENSE Sep 13 00:22:55.435678 tar[1311]: linux-arm64/README.md Sep 13 00:22:55.440880 systemd[1]: Finished prepare-helm.service. Sep 13 00:22:55.656543 systemd-networkd[1096]: eth0: Gained IPv6LL Sep 13 00:22:55.658255 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:22:55.659326 systemd[1]: Reached target network-online.target. Sep 13 00:22:55.661845 systemd[1]: Starting kubelet.service... Sep 13 00:22:55.814473 sshd_keygen[1324]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:22:55.843271 systemd[1]: Finished sshd-keygen.service. Sep 13 00:22:55.845965 systemd[1]: Starting issuegen.service... Sep 13 00:22:55.850999 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:22:55.851194 systemd[1]: Finished issuegen.service. Sep 13 00:22:55.853607 systemd[1]: Starting systemd-user-sessions.service... 
Sep 13 00:22:55.859786 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:22:55.861669 systemd[1]: Started getty@tty1.service. Sep 13 00:22:55.863656 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 13 00:22:55.864726 systemd[1]: Reached target getty.target. Sep 13 00:22:56.329903 systemd[1]: Started kubelet.service. Sep 13 00:22:56.332468 systemd[1]: Reached target multi-user.target. Sep 13 00:22:56.337585 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:22:56.346536 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:22:56.346778 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:22:56.347624 systemd[1]: Startup finished in 6.556s (kernel) + 4.790s (userspace) = 11.347s. Sep 13 00:22:56.844336 kubelet[1391]: E0913 00:22:56.844291 1391 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:22:56.846474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:22:56.846624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:22:58.200485 systemd[1]: Created slice system-sshd.slice. Sep 13 00:22:58.204704 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:40552.service. Sep 13 00:22:58.273682 sshd[1402]: Accepted publickey for core from 10.0.0.1 port 40552 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:22:58.275987 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:58.289451 systemd[1]: Created slice user-500.slice. Sep 13 00:22:58.290589 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:22:58.295601 systemd-logind[1302]: New session 1 of user core. 
Sep 13 00:22:58.300600 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:22:58.301736 systemd[1]: Starting user@500.service... Sep 13 00:22:58.304718 (systemd)[1407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:58.371569 systemd[1407]: Queued start job for default target default.target. Sep 13 00:22:58.371777 systemd[1407]: Reached target paths.target. Sep 13 00:22:58.371792 systemd[1407]: Reached target sockets.target. Sep 13 00:22:58.371802 systemd[1407]: Reached target timers.target. Sep 13 00:22:58.371812 systemd[1407]: Reached target basic.target. Sep 13 00:22:58.371861 systemd[1407]: Reached target default.target. Sep 13 00:22:58.371883 systemd[1407]: Startup finished in 60ms. Sep 13 00:22:58.371969 systemd[1]: Started user@500.service. Sep 13 00:22:58.373189 systemd[1]: Started session-1.scope. Sep 13 00:22:58.427144 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:40556.service. Sep 13 00:22:58.471877 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 40556 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:22:58.473010 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:58.483290 systemd[1]: Started session-2.scope. Sep 13 00:22:58.483489 systemd-logind[1302]: New session 2 of user core. Sep 13 00:22:58.543194 sshd[1416]: pam_unix(sshd:session): session closed for user core Sep 13 00:22:58.547313 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:40556.service: Deactivated successfully. Sep 13 00:22:58.549276 systemd-logind[1302]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:22:58.550052 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:40562.service. Sep 13 00:22:58.550434 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:22:58.551635 systemd-logind[1302]: Removed session 2. 
Sep 13 00:22:58.588461 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 40562 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:22:58.590446 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:58.594443 systemd-logind[1302]: New session 3 of user core. Sep 13 00:22:58.594932 systemd[1]: Started session-3.scope. Sep 13 00:22:58.646786 sshd[1423]: pam_unix(sshd:session): session closed for user core Sep 13 00:22:58.650359 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:40578.service. Sep 13 00:22:58.655037 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:40562.service: Deactivated successfully. Sep 13 00:22:58.655706 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:22:58.656438 systemd-logind[1302]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:22:58.657293 systemd-logind[1302]: Removed session 3. Sep 13 00:22:58.691332 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 40578 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:22:58.692860 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:58.696619 systemd-logind[1302]: New session 4 of user core. Sep 13 00:22:58.697308 systemd[1]: Started session-4.scope. Sep 13 00:22:58.755282 sshd[1429]: pam_unix(sshd:session): session closed for user core Sep 13 00:22:58.757403 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:40594.service. Sep 13 00:22:58.763484 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:40578.service: Deactivated successfully. Sep 13 00:22:58.764452 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:22:58.764751 systemd-logind[1302]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:22:58.765540 systemd-logind[1302]: Removed session 4. 
Sep 13 00:22:58.793590 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 40594 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:22:58.795200 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:58.801599 systemd[1]: Started session-5.scope. Sep 13 00:22:58.801773 systemd-logind[1302]: New session 5 of user core. Sep 13 00:22:58.876216 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:22:58.876441 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:22:58.887232 dbus-daemon[1289]: avc: received setenforce notice (enforcing=1) Sep 13 00:22:58.887534 sudo[1441]: pam_unix(sudo:session): session closed for user root Sep 13 00:22:58.889939 sshd[1435]: pam_unix(sshd:session): session closed for user core Sep 13 00:22:58.892568 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:40608.service. Sep 13 00:22:58.897000 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:40594.service: Deactivated successfully. Sep 13 00:22:58.897884 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:22:58.897899 systemd-logind[1302]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:22:58.898675 systemd-logind[1302]: Removed session 5. Sep 13 00:22:58.927761 sshd[1443]: Accepted publickey for core from 10.0.0.1 port 40608 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:22:58.928960 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:58.932704 systemd[1]: Started session-6.scope. Sep 13 00:22:58.933119 systemd-logind[1302]: New session 6 of user core. 
Sep 13 00:22:58.986838 sudo[1450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:22:58.987353 sudo[1450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:22:58.990161 sudo[1450]: pam_unix(sudo:session): session closed for user root Sep 13 00:22:58.996761 sudo[1449]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:22:58.996980 sudo[1449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:22:59.006889 systemd[1]: Stopping audit-rules.service... Sep 13 00:22:59.007000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:22:59.010329 auditctl[1453]: No rules Sep 13 00:22:59.010521 kernel: kauditd_printk_skb: 114 callbacks suppressed Sep 13 00:22:59.010544 kernel: audit: type=1305 audit(1757722979.007:147): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:22:59.010879 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:22:59.011107 systemd[1]: Stopped audit-rules.service. Sep 13 00:22:59.007000 audit[1453]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb2ec520 a2=420 a3=0 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:22:59.014586 systemd[1]: Starting audit-rules.service... 
Sep 13 00:22:59.014778 kernel: audit: type=1300 audit(1757722979.007:147): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb2ec520 a2=420 a3=0 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:22:59.014806 kernel: audit: type=1327 audit(1757722979.007:147): proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:22:59.007000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:22:59.015584 kernel: audit: type=1131 audit(1757722979.009:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.033672 augenrules[1471]: No rules Sep 13 00:22:59.034448 systemd[1]: Finished audit-rules.service. Sep 13 00:22:59.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.035251 sudo[1449]: pam_unix(sudo:session): session closed for user root Sep 13 00:22:59.033000 audit[1449]: USER_END pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:59.041414 kernel: audit: type=1130 audit(1757722979.033:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.041472 kernel: audit: type=1106 audit(1757722979.033:150): pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.041493 kernel: audit: type=1104 audit(1757722979.033:151): pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.033000 audit[1449]: CRED_DISP pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:22:59.038607 sshd[1443]: pam_unix(sshd:session): session closed for user core Sep 13 00:22:59.039000 audit[1443]: USER_END pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.050397 kernel: audit: type=1106 audit(1757722979.039:152): pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.050457 kernel: audit: type=1104 audit(1757722979.040:153): pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.050475 kernel: audit: type=1130 audit(1757722979.041:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.117:22-10.0.0.1:40620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.040000 audit[1443]: CRED_DISP pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.117:22-10.0.0.1:40620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.043239 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:40620.service. 
Sep 13 00:22:59.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.117:22-10.0.0.1:40608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.043701 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:40608.service: Deactivated successfully. Sep 13 00:22:59.045644 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:22:59.045677 systemd-logind[1302]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:22:59.050137 systemd-logind[1302]: Removed session 6. Sep 13 00:22:59.077000 audit[1476]: USER_ACCT pid=1476 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.078726 sshd[1476]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:22:59.078000 audit[1476]: CRED_ACQ pid=1476 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.078000 audit[1476]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc1803c20 a2=3 a3=1 items=0 ppid=1 pid=1476 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:22:59.078000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:22:59.080201 sshd[1476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:22:59.084211 systemd[1]: Started session-7.scope. Sep 13 00:22:59.084411 systemd-logind[1302]: New session 7 of user core. 
Sep 13 00:22:59.087000 audit[1476]: USER_START pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.089000 audit[1481]: CRED_ACQ pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:22:59.138829 sudo[1482]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:22:59.138000 audit[1482]: USER_ACCT pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.138000 audit[1482]: CRED_REFR pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.139038 sudo[1482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:22:59.141000 audit[1482]: USER_START pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:22:59.193336 systemd[1]: Starting docker.service... 
Sep 13 00:22:59.257415 env[1494]: time="2025-09-13T00:22:59.257296630Z" level=info msg="Starting up" Sep 13 00:22:59.259694 env[1494]: time="2025-09-13T00:22:59.259668017Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:22:59.259694 env[1494]: time="2025-09-13T00:22:59.259691098Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:22:59.259794 env[1494]: time="2025-09-13T00:22:59.259710558Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:22:59.259794 env[1494]: time="2025-09-13T00:22:59.259721661Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:22:59.262292 env[1494]: time="2025-09-13T00:22:59.262225962Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:22:59.262486 env[1494]: time="2025-09-13T00:22:59.262466802Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:22:59.262603 env[1494]: time="2025-09-13T00:22:59.262584436Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:22:59.262708 env[1494]: time="2025-09-13T00:22:59.262693713Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:22:59.470830 env[1494]: time="2025-09-13T00:22:59.470774811Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 00:22:59.470830 env[1494]: time="2025-09-13T00:22:59.470807801Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 00:22:59.471036 env[1494]: time="2025-09-13T00:22:59.470945491Z" level=info msg="Loading containers: start." 
Sep 13 00:22:59.551000 audit[1529]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.551000 audit[1529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffe8b0770 a2=0 a3=1 items=0 ppid=1494 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.551000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Sep 13 00:22:59.553000 audit[1531]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.553000 audit[1531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc1d89d80 a2=0 a3=1 items=0 ppid=1494 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.553000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Sep 13 00:22:59.556000 audit[1533]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.556000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcb6a7d90 a2=0 a3=1 items=0 ppid=1494 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.556000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 13 00:22:59.557000 audit[1535]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.557000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc8c52a00 a2=0 a3=1 items=0 ppid=1494 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.557000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 13 00:22:59.561000 audit[1537]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.561000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffceca17b0 a2=0 a3=1 items=0 ppid=1494 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.561000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Sep 13 00:22:59.591000 audit[1542]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.591000 audit[1542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffccf8d6e0 a2=0 a3=1 items=0 ppid=1494 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.591000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Sep 13 00:22:59.604000 audit[1544]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.604000 audit[1544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff0b81d00 a2=0 a3=1 items=0 ppid=1494 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.604000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Sep 13 00:22:59.607000 audit[1546]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.607000 audit[1546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd85034b0 a2=0 a3=1 items=0 ppid=1494 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.607000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Sep 13 00:22:59.609000 audit[1548]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.609000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffca137870 a2=0 a3=1 items=0 ppid=1494 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.609000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:22:59.626000 audit[1552]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.626000 audit[1552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc36b2180 a2=0 a3=1 items=0 ppid=1494 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.626000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:22:59.636000 audit[1553]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.636000 audit[1553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdc59d9d0 a2=0 a3=1 items=0 ppid=1494 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.636000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:22:59.649762 kernel: Initializing XFRM netlink socket
Sep 13 00:22:59.674454 env[1494]: time="2025-09-13T00:22:59.674416532Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:22:59.692000 audit[1561]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.692000 audit[1561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc76714d0 a2=0 a3=1 items=0 ppid=1494 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.692000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Sep 13 00:22:59.717000 audit[1564]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.717000 audit[1564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff2d94db0 a2=0 a3=1 items=0 ppid=1494 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.717000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Sep 13 00:22:59.721000 audit[1567]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.721000 audit[1567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff03c52a0 a2=0 a3=1 items=0 ppid=1494 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.721000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Sep 13 00:22:59.723000 audit[1569]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.723000 audit[1569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdde4b310 a2=0 a3=1 items=0 ppid=1494 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.723000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Sep 13 00:22:59.726000 audit[1571]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.726000 audit[1571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff9b61d10 a2=0 a3=1 items=0 ppid=1494 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.726000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Sep 13 00:22:59.730000 audit[1573]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.730000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe25a9bd0 a2=0 a3=1 items=0 ppid=1494 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.730000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Sep 13 00:22:59.732000 audit[1575]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.732000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffee351cf0 a2=0 a3=1 items=0 ppid=1494 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Sep 13 00:22:59.743000 audit[1578]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.743000 audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff38de9e0 a2=0 a3=1 items=0 ppid=1494 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.743000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Sep 13 00:22:59.745000 audit[1580]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.745000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffe4d6d890 a2=0 a3=1 items=0 ppid=1494 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.745000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 13 00:22:59.746000 audit[1582]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.746000 audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe2a4f610 a2=0 a3=1 items=0 ppid=1494 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.746000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 13 00:22:59.748000 audit[1584]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.748000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd68093c0 a2=0 a3=1 items=0 ppid=1494 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.748000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Sep 13 00:22:59.750514 systemd-networkd[1096]: docker0: Link UP
Sep 13 00:22:59.762000 audit[1588]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.762000 audit[1588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd072a2b0 a2=0 a3=1 items=0 ppid=1494 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.762000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:22:59.776000 audit[1589]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:22:59.776000 audit[1589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc36e83c0 a2=0 a3=1 items=0 ppid=1494 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:22:59.776000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:22:59.777921 env[1494]: time="2025-09-13T00:22:59.777869826Z" level=info msg="Loading containers: done."
Sep 13 00:22:59.794594 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3379738865-merged.mount: Deactivated successfully.
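The audit PROCTITLE records above carry each audited process's command line as a hex string with NUL-separated arguments. A minimal sketch of a decoder (the helper name is ours, not part of the log; it assumes the standard Linux audit PROCTITLE encoding):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex payload (NUL-separated argv) to a command line."""
    # bytes.fromhex turns the hex digits back into raw bytes; NUL bytes separate argv entries.
    args = [a.decode() for a in bytes.fromhex(hex_str).split(b"\x00") if a]
    return " ".join(args)

# The first PROCTITLE record in this log decodes to the iptables call that
# created the nat-table DOCKER chain:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
))  # -> /usr/sbin/iptables --wait -t nat -N DOCKER
```

Applied across the records, this recovers the whole chain-and-rule sequence dockerd issues at startup (DOCKER, DOCKER-USER, DOCKER-ISOLATION-STAGE-1/2, MASQUERADE for 172.17.0.0/16, and so on).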
Sep 13 00:22:59.805366 env[1494]: time="2025-09-13T00:22:59.805247818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:22:59.805700 env[1494]: time="2025-09-13T00:22:59.805482927Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:22:59.805700 env[1494]: time="2025-09-13T00:22:59.805588105Z" level=info msg="Daemon has completed initialization"
Sep 13 00:22:59.829156 systemd[1]: Started docker.service.
Sep 13 00:22:59.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:22:59.836903 env[1494]: time="2025-09-13T00:22:59.833650293Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:23:00.457825 env[1315]: time="2025-09-13T00:23:00.457784743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:23:01.035239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669679736.mount: Deactivated successfully.
Sep 13 00:23:02.333664 env[1315]: time="2025-09-13T00:23:02.333606954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:02.335009 env[1315]: time="2025-09-13T00:23:02.334978948Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:02.336770 env[1315]: time="2025-09-13T00:23:02.336740480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:02.338593 env[1315]: time="2025-09-13T00:23:02.338565793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:02.339409 env[1315]: time="2025-09-13T00:23:02.339350490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 00:23:02.340721 env[1315]: time="2025-09-13T00:23:02.340680987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:23:03.651231 env[1315]: time="2025-09-13T00:23:03.651164622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:03.654447 env[1315]: time="2025-09-13T00:23:03.654408370Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:03.656943 env[1315]: time="2025-09-13T00:23:03.656900183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:03.659670 env[1315]: time="2025-09-13T00:23:03.659631674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:03.660555 env[1315]: time="2025-09-13T00:23:03.660527894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 00:23:03.661303 env[1315]: time="2025-09-13T00:23:03.661274524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:23:05.068972 env[1315]: time="2025-09-13T00:23:05.068894173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:05.074150 env[1315]: time="2025-09-13T00:23:05.074111826Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:05.079445 env[1315]: time="2025-09-13T00:23:05.079411411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:05.081830 env[1315]: time="2025-09-13T00:23:05.081800185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:05.082676 env[1315]: time="2025-09-13T00:23:05.082643641Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 00:23:05.083785 env[1315]: time="2025-09-13T00:23:05.083751687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:23:06.733722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094665429.mount: Deactivated successfully.
Sep 13 00:23:07.043365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:23:07.043554 systemd[1]: Stopped kubelet.service.
Sep 13 00:23:07.048192 kernel: kauditd_printk_skb: 84 callbacks suppressed
Sep 13 00:23:07.048263 kernel: audit: type=1130 audit(1757722987.042:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:07.048288 kernel: audit: type=1131 audit(1757722987.042:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:07.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:07.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:07.045291 systemd[1]: Starting kubelet.service...
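The kernel audit lines above stamp each record as `audit(EPOCH.MS:SERIAL)` rather than with a wall-clock time. A small sketch (helper name is ours) that converts the epoch portion to the journal's UTC format, confirming the two timestamps in these lines agree:

```python
from datetime import datetime, timezone

def audit_wallclock(stamp: str) -> str:
    """Convert an audit(EPOCH.MS:SERIAL) stamp to 'Mon DD HH:MM:SS' in UTC."""
    # Split out the EPOCH.MS portion between '(' and the first ':'.
    epoch = float(stamp.split("(")[1].split(":")[0])
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%b %d %H:%M:%S")

print(audit_wallclock("audit(1757722987.042:189)"))  # -> Sep 13 00:23:07
```

This matches the `Sep 13 00:23:07.04…` journal prefix on the same kernel audit lines, so the log's journal clock and audit clock line up.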
Sep 13 00:23:07.157882 systemd[1]: Started kubelet.service.
Sep 13 00:23:07.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:07.161426 kernel: audit: type=1130 audit(1757722987.156:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:07.198096 kubelet[1633]: E0913 00:23:07.198037 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:23:07.200551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:23:07.200696 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:23:07.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:23:07.203391 kernel: audit: type=1131 audit(1757722987.199:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:23:07.406312 env[1315]: time="2025-09-13T00:23:07.405803297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:07.407551 env[1315]: time="2025-09-13T00:23:07.407511325Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:07.409446 env[1315]: time="2025-09-13T00:23:07.409401391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:07.410978 env[1315]: time="2025-09-13T00:23:07.410939159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:07.411388 env[1315]: time="2025-09-13T00:23:07.411330788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 00:23:07.412069 env[1315]: time="2025-09-13T00:23:07.411901222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:23:07.938977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257969345.mount: Deactivated successfully.
Sep 13 00:23:08.894001 env[1315]: time="2025-09-13T00:23:08.893955431Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:08.897216 env[1315]: time="2025-09-13T00:23:08.897187205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:08.901146 env[1315]: time="2025-09-13T00:23:08.900687364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:08.902551 env[1315]: time="2025-09-13T00:23:08.902386263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:08.903759 env[1315]: time="2025-09-13T00:23:08.903703071Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 00:23:08.905583 env[1315]: time="2025-09-13T00:23:08.905489435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:23:09.373789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321483477.mount: Deactivated successfully.
Sep 13 00:23:09.379339 env[1315]: time="2025-09-13T00:23:09.379299314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:09.380953 env[1315]: time="2025-09-13T00:23:09.380928475Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:09.382504 env[1315]: time="2025-09-13T00:23:09.382475627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:09.384436 env[1315]: time="2025-09-13T00:23:09.384362680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:09.385031 env[1315]: time="2025-09-13T00:23:09.385007769Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 00:23:09.385695 env[1315]: time="2025-09-13T00:23:09.385667958Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:23:09.871945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70406168.mount: Deactivated successfully.
Sep 13 00:23:12.122186 env[1315]: time="2025-09-13T00:23:12.122135532Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:12.123820 env[1315]: time="2025-09-13T00:23:12.123779965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:12.125707 env[1315]: time="2025-09-13T00:23:12.125680088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:12.127582 env[1315]: time="2025-09-13T00:23:12.127560069Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:12.128483 env[1315]: time="2025-09-13T00:23:12.128459296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 00:23:16.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.253039 systemd[1]: Stopped kubelet.service.
Sep 13 00:23:16.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.255090 systemd[1]: Starting kubelet.service...
Sep 13 00:23:16.260177 kernel: audit: type=1130 audit(1757722996.251:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.260261 kernel: audit: type=1131 audit(1757722996.251:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.278411 systemd[1]: Reloading.
Sep 13 00:23:16.345166 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-09-13T00:23:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:23:16.345198 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-09-13T00:23:16Z" level=info msg="torcx already run"
Sep 13 00:23:16.424894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:23:16.424912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:23:16.440992 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:23:16.509342 systemd[1]: Started kubelet.service.
Sep 13 00:23:16.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.512428 kernel: audit: type=1130 audit(1757722996.508:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.513315 systemd[1]: Stopping kubelet.service...
Sep 13 00:23:16.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.513727 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:23:16.513976 systemd[1]: Stopped kubelet.service.
Sep 13 00:23:16.515463 systemd[1]: Starting kubelet.service...
Sep 13 00:23:16.517406 kernel: audit: type=1131 audit(1757722996.512:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.618935 systemd[1]: Started kubelet.service.
Sep 13 00:23:16.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.623416 kernel: audit: type=1130 audit(1757722996.617:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:16.656889 kubelet[1748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:23:16.657249 kubelet[1748]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:23:16.657300 kubelet[1748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:23:16.657474 kubelet[1748]: I0913 00:23:16.657435 1748 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:23:17.298059 kubelet[1748]: I0913 00:23:17.298000 1748 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:23:17.298059 kubelet[1748]: I0913 00:23:17.298034 1748 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:23:17.298330 kubelet[1748]: I0913 00:23:17.298301 1748 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:23:17.348422 kubelet[1748]: E0913 00:23:17.348357 1748 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:23:17.349389 kubelet[1748]: I0913 00:23:17.349348 1748 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:23:17.357415 kubelet[1748]: E0913 00:23:17.357364 1748 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:23:17.357415 kubelet[1748]: I0913 00:23:17.357413 1748 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been
enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:23:17.361582 kubelet[1748]: I0913 00:23:17.360999 1748 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:23:17.362113 kubelet[1748]: I0913 00:23:17.362074 1748 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:23:17.362256 kubelet[1748]: I0913 00:23:17.362223 1748 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:23:17.362421 kubelet[1748]: I0913 00:23:17.362252 1748 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"E
xperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:23:17.362583 kubelet[1748]: I0913 00:23:17.362568 1748 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:23:17.362583 kubelet[1748]: I0913 00:23:17.362580 1748 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:23:17.362829 kubelet[1748]: I0913 00:23:17.362805 1748 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:23:17.367023 kubelet[1748]: I0913 00:23:17.367001 1748 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:23:17.368577 kubelet[1748]: I0913 00:23:17.367037 1748 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:23:17.368577 kubelet[1748]: I0913 00:23:17.367057 1748 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:23:17.368577 kubelet[1748]: I0913 00:23:17.367137 1748 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:23:17.385483 kubelet[1748]: I0913 00:23:17.385454 1748 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:23:17.386246 kubelet[1748]: I0913 00:23:17.386228 1748 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:23:17.386424 kubelet[1748]: W0913 00:23:17.386410 1748 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 00:23:17.387410 kubelet[1748]: I0913 00:23:17.387370 1748 server.go:1274] "Started kubelet" Sep 13 00:23:17.387691 kubelet[1748]: W0913 00:23:17.387579 1748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:23:17.387837 kubelet[1748]: E0913 00:23:17.387813 1748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:17.388414 kubelet[1748]: W0913 00:23:17.388335 1748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:23:17.389002 kubelet[1748]: E0913 00:23:17.388942 1748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:17.389002 kubelet[1748]: I0913 00:23:17.388883 1748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:23:17.389309 kubelet[1748]: I0913 00:23:17.389236 1748 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:23:17.389309 kubelet[1748]: I0913 00:23:17.388510 1748 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 Sep 13 00:23:17.389000 audit[1748]: AVC avc: denied { mac_admin } for pid=1748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:17.390255 kubelet[1748]: I0913 00:23:17.390225 1748 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:23:17.390350 kubelet[1748]: I0913 00:23:17.390336 1748 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:23:17.390505 kubelet[1748]: I0913 00:23:17.390490 1748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:23:17.391998 kubelet[1748]: I0913 00:23:17.391959 1748 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:23:17.389000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:23:17.393465 kernel: audit: type=1400 audit(1757722997.389:198): avc: denied { mac_admin } for pid=1748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:17.393520 kernel: audit: type=1401 audit(1757722997.389:198): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:23:17.393552 kernel: audit: type=1300 audit(1757722997.389:198): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009d8210 a1=4000bcc828 a2=40009d81e0 a3=25 items=0 ppid=1 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.389000 audit[1748]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009d8210 a1=4000bcc828 a2=40009d81e0 a3=25 items=0 ppid=1 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.394016 kubelet[1748]: E0913 00:23:17.393940 1748 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:23:17.394483 kubelet[1748]: E0913 00:23:17.390886 1748 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864afc0d8cecf3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:23:17.387349818 +0000 UTC m=+0.765128526,LastTimestamp:2025-09-13 00:23:17.387349818 +0000 UTC m=+0.765128526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:23:17.394815 kubelet[1748]: I0913 00:23:17.394783 1748 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:23:17.394916 kubelet[1748]: W0913 00:23:17.394874 1748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:23:17.394953 kubelet[1748]: E0913 00:23:17.394923 1748 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:17.394953 kubelet[1748]: I0913 00:23:17.394901 1748 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:23:17.394953 kubelet[1748]: I0913 00:23:17.394941 1748 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:23:17.395072 kubelet[1748]: I0913 00:23:17.395051 1748 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:23:17.395149 kubelet[1748]: I0913 00:23:17.395128 1748 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:23:17.396226 kernel: audit: type=1327 audit(1757722997.389:198): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:23:17.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:23:17.396793 kubelet[1748]: E0913 00:23:17.396760 1748 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:23:17.396898 kubelet[1748]: E0913 00:23:17.396822 1748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Sep 13 00:23:17.397708 kubelet[1748]: I0913 00:23:17.397679 1748 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:23:17.398118 kubelet[1748]: I0913 00:23:17.398099 1748 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:23:17.398856 kernel: audit: type=1400 audit(1757722997.389:199): avc: denied { mac_admin } for pid=1748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:17.389000 audit[1748]: AVC avc: denied { mac_admin } for pid=1748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:17.389000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:23:17.389000 audit[1748]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40003555e0 a1=4000bcc840 a2=40009d82a0 a3=25 items=0 ppid=1 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:23:17.393000 audit[1761]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.393000 audit[1761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe2faa930 a2=0 a3=1 items=0 ppid=1748 pid=1761 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:23:17.394000 audit[1762]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.394000 audit[1762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe541500 a2=0 a3=1 items=0 ppid=1748 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:23:17.401000 audit[1764]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.401000 audit[1764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd07693f0 a2=0 a3=1 items=0 ppid=1748 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.401000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:23:17.403000 audit[1768]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.403000 audit[1768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc0f53400 a2=0 a3=1 items=0 
ppid=1748 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:23:17.410000 audit[1771]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.410000 audit[1771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffe28bdf0 a2=0 a3=1 items=0 ppid=1748 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 13 00:23:17.411554 kubelet[1748]: I0913 00:23:17.411520 1748 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:23:17.412000 audit[1773]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.412000 audit[1773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcfad5130 a2=0 a3=1 items=0 ppid=1748 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:23:17.412000 audit[1774]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:17.412000 audit[1774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd0393080 a2=0 a3=1 items=0 ppid=1748 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:23:17.413364 kubelet[1748]: I0913 00:23:17.413345 1748 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:23:17.413000 audit[1775]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.413000 audit[1775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe51e1c40 a2=0 a3=1 items=0 ppid=1748 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:23:17.413756 kubelet[1748]: I0913 00:23:17.413742 1748 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:23:17.413868 kubelet[1748]: I0913 00:23:17.413856 1748 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:23:17.413977 kubelet[1748]: E0913 00:23:17.413959 1748 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:23:17.414000 audit[1776]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:17.414000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe16f62a0 a2=0 a3=1 items=0 ppid=1748 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:23:17.414000 audit[1777]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Sep 13 00:23:17.414000 audit[1777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeecaa280 a2=0 a3=1 items=0 ppid=1748 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:23:17.415000 audit[1778]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:17.415000 audit[1778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffff114d4c0 a2=0 a3=1 items=0 ppid=1748 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:23:17.416000 audit[1779]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:17.416000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd42f8ce0 a2=0 a3=1 items=0 ppid=1748 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:17.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:23:17.417091 kubelet[1748]: W0913 00:23:17.417045 1748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:23:17.417216 kubelet[1748]: E0913 00:23:17.417195 1748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:17.420279 kubelet[1748]: I0913 00:23:17.420260 1748 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:23:17.420406 kubelet[1748]: I0913 00:23:17.420392 1748 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:23:17.420488 kubelet[1748]: I0913 00:23:17.420475 1748 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:23:17.422095 kubelet[1748]: I0913 00:23:17.422077 1748 policy_none.go:49] "None policy: Start" Sep 13 00:23:17.422737 kubelet[1748]: I0913 00:23:17.422724 1748 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:23:17.422827 kubelet[1748]: I0913 00:23:17.422816 1748 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:23:17.429425 kubelet[1748]: I0913 00:23:17.428586 1748 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:23:17.429000 audit[1748]: AVC avc: denied { mac_admin } for pid=1748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:17.429000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:23:17.429000 audit[1748]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f1b170 a1=4000687a10 a2=4000f1b140 a3=25 items=0 ppid=1 pid=1748 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:17.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:23:17.429645 kubelet[1748]: I0913 00:23:17.429477 1748 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Sep 13 00:23:17.429710 kubelet[1748]: I0913 00:23:17.429692 1748 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:23:17.429753 kubelet[1748]: I0913 00:23:17.429712 1748 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:23:17.430037 kubelet[1748]: I0913 00:23:17.430019 1748 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:23:17.431004 kubelet[1748]: E0913 00:23:17.430956 1748 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 13 00:23:17.531397 kubelet[1748]: I0913 00:23:17.531334 1748 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:23:17.531929 kubelet[1748]: E0913 00:23:17.531895 1748 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Sep 13 00:23:17.597659 kubelet[1748]: E0913 00:23:17.597547 1748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms"
Sep 13 00:23:17.696772 kubelet[1748]: I0913 00:23:17.696707 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:23:17.696772 kubelet[1748]: I0913 00:23:17.696761 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54b45058d120716f48eeb4861f0c4bac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54b45058d120716f48eeb4861f0c4bac\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:23:17.696772 kubelet[1748]: I0913 00:23:17.696782 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54b45058d120716f48eeb4861f0c4bac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54b45058d120716f48eeb4861f0c4bac\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:23:17.697209 kubelet[1748]: I0913 00:23:17.696799 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54b45058d120716f48eeb4861f0c4bac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54b45058d120716f48eeb4861f0c4bac\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:23:17.697209 kubelet[1748]: I0913 00:23:17.696824 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:23:17.697209 kubelet[1748]: I0913 00:23:17.696839 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:23:17.697209 kubelet[1748]: I0913 00:23:17.696854 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:23:17.697209 kubelet[1748]: I0913 00:23:17.696871 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:23:17.697351 kubelet[1748]: I0913 00:23:17.696965 1748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:23:17.733949 kubelet[1748]: I0913 00:23:17.733926 1748 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:23:17.734517 kubelet[1748]: E0913 00:23:17.734479 1748 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Sep 13 00:23:17.820465 kubelet[1748]: E0913 00:23:17.820432 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:17.821431 env[1315]: time="2025-09-13T00:23:17.821212406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54b45058d120716f48eeb4861f0c4bac,Namespace:kube-system,Attempt:0,}"
Sep 13 00:23:17.821733 kubelet[1748]: E0913 00:23:17.821248 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:17.822172 env[1315]: time="2025-09-13T00:23:17.821860785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}"
Sep 13 00:23:17.823409 kubelet[1748]: E0913 00:23:17.823364 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:17.824010 env[1315]: time="2025-09-13T00:23:17.823958733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}"
Sep 13 00:23:17.999424 kubelet[1748]: E0913 00:23:17.999304 1748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms"
Sep 13 00:23:18.135704 kubelet[1748]: I0913 00:23:18.135675 1748 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:23:18.136336 kubelet[1748]: E0913 00:23:18.136307 1748 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Sep 13 00:23:18.288264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113137094.mount: Deactivated successfully.
Sep 13 00:23:18.292981 env[1315]: time="2025-09-13T00:23:18.292936775Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.294839 env[1315]: time="2025-09-13T00:23:18.294813174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.295670 env[1315]: time="2025-09-13T00:23:18.295645037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.296721 env[1315]: time="2025-09-13T00:23:18.296698729Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.298132 env[1315]: time="2025-09-13T00:23:18.298107718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.300313 env[1315]: time="2025-09-13T00:23:18.300283516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.302587 env[1315]: time="2025-09-13T00:23:18.302563311Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.304985 env[1315]: time="2025-09-13T00:23:18.304959420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.307335 env[1315]: time="2025-09-13T00:23:18.307299111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.308035 env[1315]: time="2025-09-13T00:23:18.308001227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.308714 env[1315]: time="2025-09-13T00:23:18.308692267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.309356 env[1315]: time="2025-09-13T00:23:18.309332887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:18.331374 env[1315]: time="2025-09-13T00:23:18.331189265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:23:18.331374 env[1315]: time="2025-09-13T00:23:18.331221892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:23:18.331374 env[1315]: time="2025-09-13T00:23:18.331231488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:23:18.331374 env[1315]: time="2025-09-13T00:23:18.331150121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:23:18.331374 env[1315]: time="2025-09-13T00:23:18.331189065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:23:18.331374 env[1315]: time="2025-09-13T00:23:18.331198981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:23:18.332165 env[1315]: time="2025-09-13T00:23:18.331456917Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9858a795fe3e80df6c22fda7037d362f0bbea6c9ca21400156d51c2034ebc403 pid=1801 runtime=io.containerd.runc.v2
Sep 13 00:23:18.332165 env[1315]: time="2025-09-13T00:23:18.331420171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be47f66619c7be8897f90cf3e4debe23d30b04f504564eba001fa89c172a3dae pid=1800 runtime=io.containerd.runc.v2
Sep 13 00:23:18.341370 env[1315]: time="2025-09-13T00:23:18.338914373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:23:18.341370 env[1315]: time="2025-09-13T00:23:18.338959675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:23:18.341370 env[1315]: time="2025-09-13T00:23:18.338970230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:23:18.341370 env[1315]: time="2025-09-13T00:23:18.339452275Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea1db75289f65ae819d4f6bd6290ee7d0a93fe8e8e5fa9ee7c35162e13f2ac4c pid=1804 runtime=io.containerd.runc.v2
Sep 13 00:23:18.392348 env[1315]: time="2025-09-13T00:23:18.392304486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54b45058d120716f48eeb4861f0c4bac,Namespace:kube-system,Attempt:0,} returns sandbox id \"9858a795fe3e80df6c22fda7037d362f0bbea6c9ca21400156d51c2034ebc403\""
Sep 13 00:23:18.393598 kubelet[1748]: E0913 00:23:18.393565 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:18.395098 env[1315]: time="2025-09-13T00:23:18.395065566Z" level=info msg="CreateContainer within sandbox \"9858a795fe3e80df6c22fda7037d362f0bbea6c9ca21400156d51c2034ebc403\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:23:18.397604 env[1315]: time="2025-09-13T00:23:18.397565553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"be47f66619c7be8897f90cf3e4debe23d30b04f504564eba001fa89c172a3dae\""
Sep 13 00:23:18.398222 kubelet[1748]: E0913 00:23:18.398192 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:18.399431 env[1315]: time="2025-09-13T00:23:18.399376778Z" level=info msg="CreateContainer within sandbox \"be47f66619c7be8897f90cf3e4debe23d30b04f504564eba001fa89c172a3dae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:23:18.404365 env[1315]: time="2025-09-13T00:23:18.404332729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea1db75289f65ae819d4f6bd6290ee7d0a93fe8e8e5fa9ee7c35162e13f2ac4c\""
Sep 13 00:23:18.405355 kubelet[1748]: E0913 00:23:18.405336 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:18.406770 env[1315]: time="2025-09-13T00:23:18.406738234Z" level=info msg="CreateContainer within sandbox \"ea1db75289f65ae819d4f6bd6290ee7d0a93fe8e8e5fa9ee7c35162e13f2ac4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:23:18.413107 env[1315]: time="2025-09-13T00:23:18.413070546Z" level=info msg="CreateContainer within sandbox \"9858a795fe3e80df6c22fda7037d362f0bbea6c9ca21400156d51c2034ebc403\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68d20952e20258d8b37ea066a2282c1e780cf0cc36acb19f44541ec9a16923cf\""
Sep 13 00:23:18.413707 env[1315]: time="2025-09-13T00:23:18.413641115Z" level=info msg="StartContainer for \"68d20952e20258d8b37ea066a2282c1e780cf0cc36acb19f44541ec9a16923cf\""
Sep 13 00:23:18.415804 env[1315]: time="2025-09-13T00:23:18.415772970Z" level=info msg="CreateContainer within sandbox \"be47f66619c7be8897f90cf3e4debe23d30b04f504564eba001fa89c172a3dae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d993173672045512006c89b62e6283ff5af7684fc2ee6b61b0eb7b7f0d29a9d3\""
Sep 13 00:23:18.416507 env[1315]: time="2025-09-13T00:23:18.416477725Z" level=info msg="StartContainer for \"d993173672045512006c89b62e6283ff5af7684fc2ee6b61b0eb7b7f0d29a9d3\""
Sep 13 00:23:18.421780 env[1315]: time="2025-09-13T00:23:18.421748348Z" level=info msg="CreateContainer within sandbox \"ea1db75289f65ae819d4f6bd6290ee7d0a93fe8e8e5fa9ee7c35162e13f2ac4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d4ccfce1506a7bddcc644e268762eddba8bb3486c892527d943da38fb04c9ac\""
Sep 13 00:23:18.422197 env[1315]: time="2025-09-13T00:23:18.422173615Z" level=info msg="StartContainer for \"4d4ccfce1506a7bddcc644e268762eddba8bb3486c892527d943da38fb04c9ac\""
Sep 13 00:23:18.488262 env[1315]: time="2025-09-13T00:23:18.488217238Z" level=info msg="StartContainer for \"68d20952e20258d8b37ea066a2282c1e780cf0cc36acb19f44541ec9a16923cf\" returns successfully"
Sep 13 00:23:18.505633 env[1315]: time="2025-09-13T00:23:18.505582357Z" level=info msg="StartContainer for \"d993173672045512006c89b62e6283ff5af7684fc2ee6b61b0eb7b7f0d29a9d3\" returns successfully"
Sep 13 00:23:18.507677 env[1315]: time="2025-09-13T00:23:18.507618571Z" level=info msg="StartContainer for \"4d4ccfce1506a7bddcc644e268762eddba8bb3486c892527d943da38fb04c9ac\" returns successfully"
Sep 13 00:23:18.938427 kubelet[1748]: I0913 00:23:18.938197 1748 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:23:19.424361 kubelet[1748]: E0913 00:23:19.424327 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:19.427036 kubelet[1748]: E0913 00:23:19.427008 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:19.428948 kubelet[1748]: E0913 00:23:19.428919 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:19.917108 kubelet[1748]: E0913 00:23:19.917069 1748 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 13 00:23:20.022127 kubelet[1748]: I0913 00:23:20.022089 1748 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 00:23:20.022127 kubelet[1748]: E0913 00:23:20.022129 1748 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 13 00:23:20.046596 kubelet[1748]: E0913 00:23:20.046495 1748 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864afc0d8cecf3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:23:17.387349818 +0000 UTC m=+0.765128526,LastTimestamp:2025-09-13 00:23:17.387349818 +0000 UTC m=+0.765128526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:23:20.371032 kubelet[1748]: I0913 00:23:20.370990 1748 apiserver.go:52] "Watching apiserver"
Sep 13 00:23:20.395741 kubelet[1748]: I0913 00:23:20.395715 1748 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:23:20.434873 kubelet[1748]: E0913 00:23:20.434828 1748 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:23:20.434873 kubelet[1748]: E0913 00:23:20.434850 1748 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:23:20.435016 kubelet[1748]: E0913 00:23:20.435007 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:20.435042 kubelet[1748]: E0913 00:23:20.435016 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:20.435259 kubelet[1748]: E0913 00:23:20.435224 1748 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:23:20.435399 kubelet[1748]: E0913 00:23:20.435371 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:21.436064 kubelet[1748]: E0913 00:23:21.436025 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:21.437244 kubelet[1748]: E0913 00:23:21.437203 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:21.437419 kubelet[1748]: E0913 00:23:21.437400 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:22.084598 systemd[1]: Reloading.
Sep 13 00:23:22.124522 /usr/lib/systemd/system-generators/torcx-generator[2043]: time="2025-09-13T00:23:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:23:22.124550 /usr/lib/systemd/system-generators/torcx-generator[2043]: time="2025-09-13T00:23:22Z" level=info msg="torcx already run"
Sep 13 00:23:22.188911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:23:22.188931 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:23:22.204476 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:23:22.296154 systemd[1]: Stopping kubelet.service...
Sep 13 00:23:22.319768 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:23:22.320073 systemd[1]: Stopped kubelet.service.
Sep 13 00:23:22.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:22.320735 kernel: kauditd_printk_skb: 43 callbacks suppressed
Sep 13 00:23:22.320786 kernel: audit: type=1131 audit(1757723002.318:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:22.322718 systemd[1]: Starting kubelet.service...
Sep 13 00:23:22.427289 systemd[1]: Started kubelet.service.
Sep 13 00:23:22.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:22.430430 kernel: audit: type=1130 audit(1757723002.426:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:22.508474 kubelet[2096]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:23:22.508474 kubelet[2096]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:23:22.508474 kubelet[2096]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:23:22.508868 kubelet[2096]: I0913 00:23:22.508516 2096 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:23:22.513840 kubelet[2096]: I0913 00:23:22.513805 2096 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:23:22.513947 kubelet[2096]: I0913 00:23:22.513936 2096 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:23:22.514273 kubelet[2096]: I0913 00:23:22.514252 2096 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:23:22.515876 kubelet[2096]: I0913 00:23:22.515850 2096 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:23:22.518118 kubelet[2096]: I0913 00:23:22.518075 2096 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:23:22.522188 kubelet[2096]: E0913 00:23:22.522154 2096 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:23:22.522188 kubelet[2096]: I0913 00:23:22.522180 2096 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:23:22.524674 kubelet[2096]: I0913 00:23:22.524641 2096 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:23:22.525059 kubelet[2096]: I0913 00:23:22.525034 2096 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:23:22.525189 kubelet[2096]: I0913 00:23:22.525165 2096 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:23:22.525371 kubelet[2096]: I0913 00:23:22.525192 2096 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:23:22.525460 kubelet[2096]: I0913 00:23:22.525403 2096 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:23:22.525460 kubelet[2096]: I0913 00:23:22.525413 2096 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:23:22.525460 kubelet[2096]: I0913 00:23:22.525448 2096 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:23:22.525546 kubelet[2096]: I0913 00:23:22.525536 2096 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:23:22.525572 kubelet[2096]: I0913 00:23:22.525550 2096 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:23:22.525572 kubelet[2096]: I0913 00:23:22.525568 2096 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:23:22.525627 kubelet[2096]: I0913 00:23:22.525581 2096 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:23:22.526287 kubelet[2096]: I0913 00:23:22.526267 2096 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:23:22.526825 kubelet[2096]: I0913 00:23:22.526808 2096 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:23:22.527247 kubelet[2096]: I0913 00:23:22.527205 2096 server.go:1274] "Started kubelet"
Sep 13 00:23:22.527000 audit[2096]: AVC avc: denied { mac_admin } for pid=2096 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:23:22.529565 kubelet[2096]: I0913 00:23:22.529517 2096 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:23:22.529988 kubelet[2096]: I0913 00:23:22.529959 2096 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:23:22.530167 kubelet[2096]: I0913 00:23:22.530132 2096 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:23:22.531189 kubelet[2096]: I0913 00:23:22.531169 2096 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:23:22.527000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:23:22.532723 kernel: audit: type=1400 audit(1757723002.527:215): avc: denied { mac_admin } for pid=2096 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:23:22.532778 kernel: audit: type=1401 audit(1757723002.527:215): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:23:22.532795 kernel: audit: type=1300 audit(1757723002.527:215): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009eba40 a1=40006e0a98 a2=40009eba10 a3=25 items=0 ppid=1 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:22.527000 audit[2096]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009eba40 a1=40006e0a98 a2=40009eba10 a3=25 items=0 ppid=1 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:22.534739 kubelet[2096]: E0913 00:23:22.534709 2096 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:23:22.535458 kubelet[2096]: I0913 00:23:22.535425 2096 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Sep 13 00:23:22.535532 kubelet[2096]: I0913 00:23:22.535482 2096 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Sep 13 00:23:22.535532 kubelet[2096]: I0913 00:23:22.535508 2096 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:23:22.527000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:23:22.538411 kubelet[2096]: I0913 00:23:22.536058 2096 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:23:22.538513 kernel: audit: type=1327 audit(1757723002.527:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:23:22.538542 kernel: audit: type=1400 audit(1757723002.534:216): avc: denied { mac_admin } for pid=2096 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:23:22.534000 audit[2096]: AVC avc: denied { mac_admin } for pid=2096 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:23:22.540672 kernel: audit: type=1401 audit(1757723002.534:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:23:22.534000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:23:22.534000 audit[2096]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000991020 a1=40006e0ab0 a2=40009ebad0 a3=25 items=0 ppid=1 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:22.543457 kubelet[2096]: I0913 00:23:22.543432 2096 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:23:22.544269 kernel: audit: type=1300 audit(1757723002.534:216): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000991020 a1=40006e0ab0 a2=40009ebad0 a3=25 items=0 ppid=1 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:22.544314 kernel: audit: type=1327 audit(1757723002.534:216): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:23:22.534000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:23:22.548227 kubelet[2096]: E0913 00:23:22.548197 2096 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:23:22.555143 kubelet[2096]: I0913 00:23:22.554537 2096 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:23:22.555143 kubelet[2096]: I0913 00:23:22.554859 2096 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:23:22.556800 kubelet[2096]: I0913 00:23:22.556761 2096 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:23:22.556880 kubelet[2096]: I0913 00:23:22.556843 2096 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:23:22.561537 kubelet[2096]: I0913 00:23:22.561260 2096 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:23:22.565562 kubelet[2096]: I0913 00:23:22.565528 2096 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:23:22.566492 kubelet[2096]: I0913 00:23:22.566475 2096 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:23:22.566619 kubelet[2096]: I0913 00:23:22.566607 2096 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:23:22.566704 kubelet[2096]: I0913 00:23:22.566693 2096 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:23:22.566810 kubelet[2096]: E0913 00:23:22.566792 2096 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:23:22.604973 kubelet[2096]: I0913 00:23:22.604947 2096 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:23:22.605129 kubelet[2096]: I0913 00:23:22.605115 2096 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:23:22.605192 kubelet[2096]: I0913 00:23:22.605183 2096 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:23:22.605372 kubelet[2096]: I0913 00:23:22.605358 2096 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:23:22.605474 kubelet[2096]: I0913 00:23:22.605449 2096 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:23:22.605526 kubelet[2096]: I0913 00:23:22.605518 2096 policy_none.go:49] "None policy: Start"
Sep 13 00:23:22.606238 kubelet[2096]: I0913 00:23:22.606220 2096 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:23:22.606334 kubelet[2096]: I0913 00:23:22.606324 2096 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:23:22.606593 kubelet[2096]: I0913 00:23:22.606578 2096 state_mem.go:75] "Updated machine memory state"
Sep 13 00:23:22.607990 kubelet[2096]: I0913 00:23:22.607966 2096 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:23:22.606000 audit[2096]: AVC avc: denied { mac_admin } for pid=2096 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13
00:23:22.606000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:23:22.606000 audit[2096]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e67560 a1=4000e7c348 a2=4000e67530 a3=25 items=0 ppid=1 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:22.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:23:22.608317 kubelet[2096]: I0913 00:23:22.608113 2096 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:23:22.608583 kubelet[2096]: I0913 00:23:22.608570 2096 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:23:22.608698 kubelet[2096]: I0913 00:23:22.608661 2096 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:23:22.609591 kubelet[2096]: I0913 00:23:22.609495 2096 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:23:22.672920 kubelet[2096]: E0913 00:23:22.672874 2096 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:23:22.673959 kubelet[2096]: E0913 00:23:22.673934 2096 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:23:22.674489 kubelet[2096]: E0913 00:23:22.674467 2096 kubelet.go:1915] "Failed creating a mirror pod 
for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:23:22.714420 kubelet[2096]: I0913 00:23:22.712702 2096 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:23:22.720534 kubelet[2096]: I0913 00:23:22.720501 2096 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:23:22.720589 kubelet[2096]: I0913 00:23:22.720574 2096 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:23:22.856676 kubelet[2096]: I0913 00:23:22.856639 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:23:22.856885 kubelet[2096]: I0913 00:23:22.856867 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54b45058d120716f48eeb4861f0c4bac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54b45058d120716f48eeb4861f0c4bac\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:23:22.856965 kubelet[2096]: I0913 00:23:22.856952 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54b45058d120716f48eeb4861f0c4bac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54b45058d120716f48eeb4861f0c4bac\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:23:22.857040 kubelet[2096]: I0913 00:23:22.857027 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/54b45058d120716f48eeb4861f0c4bac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54b45058d120716f48eeb4861f0c4bac\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:23:22.857115 kubelet[2096]: I0913 00:23:22.857103 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:23:22.857187 kubelet[2096]: I0913 00:23:22.857174 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:23:22.857269 kubelet[2096]: I0913 00:23:22.857256 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:23:22.857347 kubelet[2096]: I0913 00:23:22.857335 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:23:22.857453 kubelet[2096]: I0913 00:23:22.857440 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:23:22.974265 kubelet[2096]: E0913 00:23:22.973767 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:22.974937 kubelet[2096]: E0913 00:23:22.974901 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:22.975028 kubelet[2096]: E0913 00:23:22.974910 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:23.526695 kubelet[2096]: I0913 00:23:23.526526 2096 apiserver.go:52] "Watching apiserver" Sep 13 00:23:23.555502 kubelet[2096]: I0913 00:23:23.555465 2096 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:23:23.578171 kubelet[2096]: E0913 00:23:23.578140 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:23.578435 kubelet[2096]: E0913 00:23:23.578401 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:23.582785 kubelet[2096]: E0913 00:23:23.582754 2096 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:23:23.582905 kubelet[2096]: E0913 00:23:23.582896 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:23.594511 kubelet[2096]: I0913 00:23:23.594456 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.594422177 podStartE2EDuration="2.594422177s" podCreationTimestamp="2025-09-13 00:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:23:23.594067091 +0000 UTC m=+1.161467430" watchObservedRunningTime="2025-09-13 00:23:23.594422177 +0000 UTC m=+1.161822516" Sep 13 00:23:23.601209 kubelet[2096]: I0913 00:23:23.601148 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.601133742 podStartE2EDuration="2.601133742s" podCreationTimestamp="2025-09-13 00:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:23:23.601100709 +0000 UTC m=+1.168501048" watchObservedRunningTime="2025-09-13 00:23:23.601133742 +0000 UTC m=+1.168534081" Sep 13 00:23:23.609145 kubelet[2096]: I0913 00:23:23.609091 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.60907557 podStartE2EDuration="2.60907557s" podCreationTimestamp="2025-09-13 00:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:23:23.608899367 +0000 UTC m=+1.176299666" watchObservedRunningTime="2025-09-13 00:23:23.60907557 +0000 UTC m=+1.176475909" Sep 13 00:23:24.579802 kubelet[2096]: E0913 00:23:24.579771 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:26.641788 kubelet[2096]: E0913 00:23:26.641739 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:26.890254 kubelet[2096]: E0913 00:23:26.890215 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:27.014669 kubelet[2096]: I0913 00:23:27.014554 2096 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:23:27.015019 env[1315]: time="2025-09-13T00:23:27.014866295Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:23:27.015284 kubelet[2096]: I0913 00:23:27.015063 2096 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:23:27.998068 kubelet[2096]: I0913 00:23:27.998027 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3d58396-f1bf-4a92-bea3-07566f6b80f4-lib-modules\") pod \"kube-proxy-6h66d\" (UID: \"b3d58396-f1bf-4a92-bea3-07566f6b80f4\") " pod="kube-system/kube-proxy-6h66d" Sep 13 00:23:27.998068 kubelet[2096]: I0913 00:23:27.998074 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpslr\" (UniqueName: \"kubernetes.io/projected/b3d58396-f1bf-4a92-bea3-07566f6b80f4-kube-api-access-vpslr\") pod \"kube-proxy-6h66d\" (UID: \"b3d58396-f1bf-4a92-bea3-07566f6b80f4\") " pod="kube-system/kube-proxy-6h66d" Sep 13 00:23:27.998534 kubelet[2096]: I0913 00:23:27.998098 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/b3d58396-f1bf-4a92-bea3-07566f6b80f4-kube-proxy\") pod \"kube-proxy-6h66d\" (UID: \"b3d58396-f1bf-4a92-bea3-07566f6b80f4\") " pod="kube-system/kube-proxy-6h66d" Sep 13 00:23:27.998534 kubelet[2096]: I0913 00:23:27.998116 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3d58396-f1bf-4a92-bea3-07566f6b80f4-xtables-lock\") pod \"kube-proxy-6h66d\" (UID: \"b3d58396-f1bf-4a92-bea3-07566f6b80f4\") " pod="kube-system/kube-proxy-6h66d" Sep 13 00:23:28.106433 kubelet[2096]: I0913 00:23:28.106376 2096 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:23:28.199821 kubelet[2096]: I0913 00:23:28.199782 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4n6b\" (UniqueName: \"kubernetes.io/projected/2cbb0ae3-9744-4e5a-8ccc-670a11338b17-kube-api-access-n4n6b\") pod \"tigera-operator-58fc44c59b-f4qbf\" (UID: \"2cbb0ae3-9744-4e5a-8ccc-670a11338b17\") " pod="tigera-operator/tigera-operator-58fc44c59b-f4qbf" Sep 13 00:23:28.200026 kubelet[2096]: I0913 00:23:28.200008 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2cbb0ae3-9744-4e5a-8ccc-670a11338b17-var-lib-calico\") pod \"tigera-operator-58fc44c59b-f4qbf\" (UID: \"2cbb0ae3-9744-4e5a-8ccc-670a11338b17\") " pod="tigera-operator/tigera-operator-58fc44c59b-f4qbf" Sep 13 00:23:28.279621 kubelet[2096]: E0913 00:23:28.279506 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:28.280288 env[1315]: time="2025-09-13T00:23:28.280248083Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6h66d,Uid:b3d58396-f1bf-4a92-bea3-07566f6b80f4,Namespace:kube-system,Attempt:0,}" Sep 13 00:23:28.297423 env[1315]: time="2025-09-13T00:23:28.297335391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:28.297423 env[1315]: time="2025-09-13T00:23:28.297375790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:28.297645 env[1315]: time="2025-09-13T00:23:28.297414508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:28.297771 env[1315]: time="2025-09-13T00:23:28.297706738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/721d8fbea684d53bc9ccabe4404a2c00552b9de6e93da80b8b06ca09daee53d1 pid=2154 runtime=io.containerd.runc.v2 Sep 13 00:23:28.338291 env[1315]: time="2025-09-13T00:23:28.338233181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6h66d,Uid:b3d58396-f1bf-4a92-bea3-07566f6b80f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"721d8fbea684d53bc9ccabe4404a2c00552b9de6e93da80b8b06ca09daee53d1\"" Sep 13 00:23:28.339118 kubelet[2096]: E0913 00:23:28.339090 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:28.343668 env[1315]: time="2025-09-13T00:23:28.343521004Z" level=info msg="CreateContainer within sandbox \"721d8fbea684d53bc9ccabe4404a2c00552b9de6e93da80b8b06ca09daee53d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:23:28.355932 env[1315]: time="2025-09-13T00:23:28.355892709Z" level=info msg="CreateContainer within sandbox 
\"721d8fbea684d53bc9ccabe4404a2c00552b9de6e93da80b8b06ca09daee53d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac624a848706fbf3b8087e9986324d3b9f9951660691957ee2c4906375d1b1ce\"" Sep 13 00:23:28.357663 env[1315]: time="2025-09-13T00:23:28.357576133Z" level=info msg="StartContainer for \"ac624a848706fbf3b8087e9986324d3b9f9951660691957ee2c4906375d1b1ce\"" Sep 13 00:23:28.407661 env[1315]: time="2025-09-13T00:23:28.407617897Z" level=info msg="StartContainer for \"ac624a848706fbf3b8087e9986324d3b9f9951660691957ee2c4906375d1b1ce\" returns successfully" Sep 13 00:23:28.442365 env[1315]: time="2025-09-13T00:23:28.441795832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-f4qbf,Uid:2cbb0ae3-9744-4e5a-8ccc-670a11338b17,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:23:28.472082 env[1315]: time="2025-09-13T00:23:28.472020660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:28.472082 env[1315]: time="2025-09-13T00:23:28.472058858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:28.472082 env[1315]: time="2025-09-13T00:23:28.472068978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:28.472683 env[1315]: time="2025-09-13T00:23:28.472617160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4abf49237c36c28e3983806a36785b9452f9e66d63d4c999ed3f340810daf9cd pid=2236 runtime=io.containerd.runc.v2 Sep 13 00:23:28.514306 env[1315]: time="2025-09-13T00:23:28.514249605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-f4qbf,Uid:2cbb0ae3-9744-4e5a-8ccc-670a11338b17,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4abf49237c36c28e3983806a36785b9452f9e66d63d4c999ed3f340810daf9cd\"" Sep 13 00:23:28.516099 env[1315]: time="2025-09-13T00:23:28.516055465Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:23:28.585000 audit[2299]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.587741 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:23:28.587798 kernel: audit: type=1325 audit(1757723008.585:218): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.585000 audit[2299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc21cd6f0 a2=0 a3=1 items=0 ppid=2207 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.592597 kernel: audit: type=1300 audit(1757723008.585:218): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc21cd6f0 a2=0 a3=1 items=0 ppid=2207 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.592691 kernel: 
audit: type=1327 audit(1757723008.585:218): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:23:28.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:23:28.586000 audit[2297]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.595816 kernel: audit: type=1325 audit(1757723008.586:219): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.586000 audit[2297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc9387a0 a2=0 a3=1 items=0 ppid=2207 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.597951 kubelet[2096]: E0913 00:23:28.597903 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:28.599417 kernel: audit: type=1300 audit(1757723008.586:219): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc9387a0 a2=0 a3=1 items=0 ppid=2207 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.599503 kernel: audit: type=1327 audit(1757723008.586:219): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:23:28.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 
Sep 13 00:23:28.589000 audit[2303]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.602837 kernel: audit: type=1325 audit(1757723008.589:220): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.589000 audit[2303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2a43710 a2=0 a3=1 items=0 ppid=2207 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.607599 kernel: audit: type=1300 audit(1757723008.589:220): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2a43710 a2=0 a3=1 items=0 ppid=2207 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:23:28.609336 kernel: audit: type=1327 audit(1757723008.589:220): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:23:28.609401 kernel: audit: type=1325 audit(1757723008.591:221): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.591000 audit[2301]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.591000 audit[2301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5236930 a2=0 a3=1 items=0 ppid=2207 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:23:28.592000 audit[2304]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.592000 audit[2304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8647480 a2=0 a3=1 items=0 ppid=2207 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:23:28.592000 audit[2305]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.592000 audit[2305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff930290 a2=0 a3=1 items=0 ppid=2207 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:23:28.686000 audit[2306]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.686000 audit[2306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff3f7e9f0 a2=0 a3=1 items=0 ppid=2207 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:23:28.689000 audit[2308]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.689000 audit[2308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd025bc10 a2=0 a3=1 items=0 ppid=2207 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:23:28.692000 audit[2311]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.692000 audit[2311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff6f66700 a2=0 a3=1 items=0 ppid=2207 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:23:28.693000 audit[2312]: NETFILTER_CFG table=filter:47 
family=2 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.693000 audit[2312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb30e0d0 a2=0 a3=1 items=0 ppid=2207 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:23:28.696000 audit[2314]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.696000 audit[2314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4e64aa0 a2=0 a3=1 items=0 ppid=2207 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:23:28.697000 audit[2315]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.697000 audit[2315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc843f8f0 a2=0 a3=1 items=0 ppid=2207 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.697000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:23:28.699000 audit[2317]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.699000 audit[2317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0202600 a2=0 a3=1 items=0 ppid=2207 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:23:28.703000 audit[2320]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.703000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc7522d90 a2=0 a3=1 items=0 ppid=2207 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:23:28.704000 audit[2321]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.704000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=100 a0=3 a1=ffffd3f056a0 a2=0 a3=1 items=0 ppid=2207 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:23:28.706000 audit[2323]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.706000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff959d220 a2=0 a3=1 items=0 ppid=2207 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:23:28.708000 audit[2324]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.708000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9307d80 a2=0 a3=1 items=0 ppid=2207 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:23:28.710000 audit[2326]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2326 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.710000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe3b7a760 a2=0 a3=1 items=0 ppid=2207 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:23:28.713000 audit[2329]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.713000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf2f3560 a2=0 a3=1 items=0 ppid=2207 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:23:28.717000 audit[2332]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.717000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffa4b2a40 a2=0 a3=1 items=0 ppid=2207 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:23:28.717000 audit[2333]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.717000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd2d8cbf0 a2=0 a3=1 items=0 ppid=2207 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:23:28.720000 audit[2335]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.720000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe4073820 a2=0 a3=1 items=0 ppid=2207 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:23:28.726000 audit[2338]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.726000 audit[2338]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=fffff8830900 a2=0 a3=1 items=0 ppid=2207 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:23:28.727000 audit[2339]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.727000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5c54950 a2=0 a3=1 items=0 ppid=2207 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:23:28.729000 audit[2341]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:23:28.729000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff1fd3e60 a2=0 a3=1 items=0 ppid=2207 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 
Sep 13 00:23:28.752000 audit[2347]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:28.752000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe6838280 a2=0 a3=1 items=0 ppid=2207 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:28.761000 audit[2347]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:28.761000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe6838280 a2=0 a3=1 items=0 ppid=2207 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:28.762000 audit[2352]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.762000 audit[2352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd6179580 a2=0 a3=1 items=0 ppid=2207 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.762000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:23:28.765000 audit[2354]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.765000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc378f360 a2=0 a3=1 items=0 ppid=2207 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:23:28.768000 audit[2357]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.768000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc844da80 a2=0 a3=1 items=0 ppid=2207 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:23:28.769000 audit[2358]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.769000 audit[2358]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffc30c31c0 a2=0 a3=1 items=0 ppid=2207 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:23:28.772000 audit[2360]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.772000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8f8b8f0 a2=0 a3=1 items=0 ppid=2207 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:23:28.773000 audit[2361]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.773000 audit[2361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd97b0b0 a2=0 a3=1 items=0 ppid=2207 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:23:28.775000 audit[2363]: NETFILTER_CFG table=filter:71 family=10 entries=1 
op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.775000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffc1ac010 a2=0 a3=1 items=0 ppid=2207 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:23:28.778000 audit[2366]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.778000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff14b1920 a2=0 a3=1 items=0 ppid=2207 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:23:28.780000 audit[2367]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.780000 audit[2367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2dfe9f0 a2=0 a3=1 items=0 ppid=2207 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:23:28.782000 audit[2369]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.782000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffa46d010 a2=0 a3=1 items=0 ppid=2207 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:23:28.783000 audit[2370]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.783000 audit[2370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff740bd30 a2=0 a3=1 items=0 ppid=2207 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:23:28.786000 audit[2372]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.786000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf4fbe20 a2=0 a3=1 items=0 ppid=2207 
pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:23:28.789000 audit[2375]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.789000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe0c3b360 a2=0 a3=1 items=0 ppid=2207 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:23:28.792000 audit[2378]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.792000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff51f84e0 a2=0 a3=1 items=0 ppid=2207 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.792000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:23:28.793000 audit[2379]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.793000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc82d7790 a2=0 a3=1 items=0 ppid=2207 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:23:28.795000 audit[2381]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.795000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffda68a200 a2=0 a3=1 items=0 ppid=2207 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:23:28.798000 audit[2384]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.798000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc137ec10 a2=0 a3=1 items=0 ppid=2207 
pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:23:28.799000 audit[2385]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.799000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe70ad7e0 a2=0 a3=1 items=0 ppid=2207 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:23:28.802000 audit[2387]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.802000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdeb71410 a2=0 a3=1 items=0 ppid=2207 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:23:28.803000 audit[2388]: NETFILTER_CFG table=filter:84 
family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.803000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1183950 a2=0 a3=1 items=0 ppid=2207 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:23:28.805000 audit[2390]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.805000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff9847ae0 a2=0 a3=1 items=0 ppid=2207 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:23:28.809000 audit[2393]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:23:28.809000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffc247b60 a2=0 a3=1 items=0 ppid=2207 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:23:28.812000 
audit[2395]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:23:28.812000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff82e2d30 a2=0 a3=1 items=0 ppid=2207 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.812000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:28.812000 audit[2395]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:23:28.812000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff82e2d30 a2=0 a3=1 items=0 ppid=2207 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:28.812000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:30.027150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334312979.mount: Deactivated successfully. 
Sep 13 00:23:30.577457 env[1315]: time="2025-09-13T00:23:30.577416075Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:30.578780 env[1315]: time="2025-09-13T00:23:30.578747635Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:30.580086 env[1315]: time="2025-09-13T00:23:30.580052916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:30.581604 env[1315]: time="2025-09-13T00:23:30.581569391Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:30.582777 env[1315]: time="2025-09-13T00:23:30.582747196Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 13 00:23:30.585436 env[1315]: time="2025-09-13T00:23:30.585376477Z" level=info msg="CreateContainer within sandbox \"4abf49237c36c28e3983806a36785b9452f9e66d63d4c999ed3f340810daf9cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:23:30.596602 env[1315]: time="2025-09-13T00:23:30.596555983Z" level=info msg="CreateContainer within sandbox \"4abf49237c36c28e3983806a36785b9452f9e66d63d4c999ed3f340810daf9cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d12f30feba77a197fb5571734f7142cecfa0c84b5c603273869b8ebe122c37ad\""
Sep 13 00:23:30.598211 env[1315]: time="2025-09-13T00:23:30.597066968Z" level=info msg="StartContainer for \"d12f30feba77a197fb5571734f7142cecfa0c84b5c603273869b8ebe122c37ad\""
Sep 13 00:23:30.654130 env[1315]: time="2025-09-13T00:23:30.651511741Z" level=info msg="StartContainer for \"d12f30feba77a197fb5571734f7142cecfa0c84b5c603273869b8ebe122c37ad\" returns successfully"
Sep 13 00:23:30.707098 kubelet[2096]: E0913 00:23:30.707065 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:30.720636 kubelet[2096]: I0913 00:23:30.720569 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6h66d" podStartSLOduration=3.720552597 podStartE2EDuration="3.720552597s" podCreationTimestamp="2025-09-13 00:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:23:28.607673436 +0000 UTC m=+6.175073775" watchObservedRunningTime="2025-09-13 00:23:30.720552597 +0000 UTC m=+8.287952936"
Sep 13 00:23:31.605488 kubelet[2096]: E0913 00:23:31.605425 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:31.613873 kubelet[2096]: I0913 00:23:31.613687 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-f4qbf" podStartSLOduration=1.545907486 podStartE2EDuration="3.613671943s" podCreationTimestamp="2025-09-13 00:23:28 +0000 UTC" firstStartedPulling="2025-09-13 00:23:28.515639639 +0000 UTC m=+6.083039938" lastFinishedPulling="2025-09-13 00:23:30.583404056 +0000 UTC m=+8.150804395" observedRunningTime="2025-09-13 00:23:31.613478949 +0000 UTC m=+9.180879288" watchObservedRunningTime="2025-09-13 00:23:31.613671943 +0000 UTC m=+9.181072282"
Sep 13 00:23:35.983007 sudo[1482]: pam_unix(sudo:session): session closed for user root
Sep 13 00:23:35.982000 audit[1482]: USER_END pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:35.983862 kernel: kauditd_printk_skb: 143 callbacks suppressed
Sep 13 00:23:35.983930 kernel: audit: type=1106 audit(1757723015.982:269): pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:35.985639 sshd[1476]: pam_unix(sshd:session): session closed for user core
Sep 13 00:23:35.982000 audit[1482]: CRED_DISP pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:35.988710 kernel: audit: type=1104 audit(1757723015.982:270): pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:35.988000 audit[1476]: USER_END pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:23:35.990513 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:40620.service: Deactivated successfully.
Sep 13 00:23:35.992106 kernel: audit: type=1106 audit(1757723015.988:271): pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:23:35.992133 kernel: audit: type=1104 audit(1757723015.988:272): pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:23:35.988000 audit[1476]: CRED_DISP pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:23:35.991248 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:23:35.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.117:22-10.0.0.1:40620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:35.996762 kernel: audit: type=1131 audit(1757723015.988:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.117:22-10.0.0.1:40620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:23:35.996787 systemd-logind[1302]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:23:35.997588 systemd-logind[1302]: Removed session 7.
Sep 13 00:23:36.648891 kubelet[2096]: E0913 00:23:36.648854 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:36.898165 kubelet[2096]: E0913 00:23:36.898124 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:37.157000 audit[2488]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:37.157000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff458cc60 a2=0 a3=1 items=0 ppid=2207 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:37.162559 kernel: audit: type=1325 audit(1757723017.157:274): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:37.162637 kernel: audit: type=1300 audit(1757723017.157:274): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff458cc60 a2=0 a3=1 items=0 ppid=2207 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:37.162660 kernel: audit: type=1327 audit(1757723017.157:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:37.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:37.184000 audit[2488]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:37.187436 kernel: audit: type=1325 audit(1757723017.184:275): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:37.184000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff458cc60 a2=0 a3=1 items=0 ppid=2207 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:37.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:37.193410 kernel: audit: type=1300 audit(1757723017.184:275): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff458cc60 a2=0 a3=1 items=0 ppid=2207 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:37.202000 audit[2490]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:37.202000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe449f7d0 a2=0 a3=1 items=0 ppid=2207 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:37.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:37.207000 audit[2490]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:37.207000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe449f7d0 a2=0 a3=1 items=0 ppid=2207 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:37.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:40.002107 update_engine[1304]: I0913 00:23:40.002058 1304 update_attempter.cc:509] Updating boot flags...
Sep 13 00:23:40.439000 audit[2507]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:40.439000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcd80da00 a2=0 a3=1 items=0 ppid=2207 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:40.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:40.451000 audit[2507]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:40.451000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcd80da00 a2=0 a3=1 items=0 ppid=2207 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:40.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:40.470000 audit[2509]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:40.470000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffdb748a90 a2=0 a3=1 items=0 ppid=2207 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:40.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:40.474000 audit[2509]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:23:40.474000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb748a90 a2=0 a3=1 items=0 ppid=2207 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:23:40.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:23:40.482776 kubelet[2096]: I0913 00:23:40.482724 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74d353b8-3700-4ed7-84ae-8aeea7caf1c3-tigera-ca-bundle\") pod \"calico-typha-578c6bf6f9-lbpbl\" (UID: \"74d353b8-3700-4ed7-84ae-8aeea7caf1c3\") " pod="calico-system/calico-typha-578c6bf6f9-lbpbl"
Sep 13 00:23:40.482776 kubelet[2096]: I0913 00:23:40.482774 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/74d353b8-3700-4ed7-84ae-8aeea7caf1c3-typha-certs\") pod \"calico-typha-578c6bf6f9-lbpbl\" (UID: \"74d353b8-3700-4ed7-84ae-8aeea7caf1c3\") " pod="calico-system/calico-typha-578c6bf6f9-lbpbl"
Sep 13 00:23:40.483158 kubelet[2096]: I0913 00:23:40.482794 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvdmt\" (UniqueName: \"kubernetes.io/projected/74d353b8-3700-4ed7-84ae-8aeea7caf1c3-kube-api-access-wvdmt\") pod \"calico-typha-578c6bf6f9-lbpbl\" (UID: \"74d353b8-3700-4ed7-84ae-8aeea7caf1c3\") " pod="calico-system/calico-typha-578c6bf6f9-lbpbl"
Sep 13 00:23:40.770349 kubelet[2096]: E0913 00:23:40.770235 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:40.771127 env[1315]: time="2025-09-13T00:23:40.770775277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578c6bf6f9-lbpbl,Uid:74d353b8-3700-4ed7-84ae-8aeea7caf1c3,Namespace:calico-system,Attempt:0,}"
Sep 13 00:23:40.785956 kubelet[2096]: I0913 00:23:40.785924 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwbt\" (UniqueName: \"kubernetes.io/projected/9d58a4dd-9c15-4386-8022-99bef05536fc-kube-api-access-wrwbt\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.786470 kubelet[2096]: I0913 00:23:40.786354 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-var-run-calico\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.786651 kubelet[2096]: I0913 00:23:40.786634 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-lib-modules\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.786790 kubelet[2096]: I0913 00:23:40.786774 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d58a4dd-9c15-4386-8022-99bef05536fc-node-certs\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.786970 kubelet[2096]: I0913 00:23:40.786911 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-var-lib-calico\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.787104 kubelet[2096]: I0913 00:23:40.787089 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-xtables-lock\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.787242 kubelet[2096]: I0913 00:23:40.787226 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-cni-net-dir\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.787475 kubelet[2096]: I0913 00:23:40.787367 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-flexvol-driver-host\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.787475 kubelet[2096]: I0913 00:23:40.787439 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-cni-bin-dir\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.787475 kubelet[2096]: I0913 00:23:40.787458 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-policysync\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.787632 kubelet[2096]: I0913 00:23:40.787492 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d58a4dd-9c15-4386-8022-99bef05536fc-tigera-ca-bundle\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.787632 kubelet[2096]: I0913 00:23:40.787510 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d58a4dd-9c15-4386-8022-99bef05536fc-cni-log-dir\") pod \"calico-node-c9dgc\" (UID: \"9d58a4dd-9c15-4386-8022-99bef05536fc\") " pod="calico-system/calico-node-c9dgc"
Sep 13 00:23:40.788475 env[1315]: time="2025-09-13T00:23:40.788368610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:23:40.788475 env[1315]: time="2025-09-13T00:23:40.788457449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:23:40.788596 env[1315]: time="2025-09-13T00:23:40.788468129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:23:40.788662 env[1315]: time="2025-09-13T00:23:40.788628126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d31f37f0253c294e4bef367a95aeacc6ab55b13f7efc4edc7bb7cb1a35a62e23 pid=2519 runtime=io.containerd.runc.v2
Sep 13 00:23:40.865422 env[1315]: time="2025-09-13T00:23:40.865371907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578c6bf6f9-lbpbl,Uid:74d353b8-3700-4ed7-84ae-8aeea7caf1c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d31f37f0253c294e4bef367a95aeacc6ab55b13f7efc4edc7bb7cb1a35a62e23\""
Sep 13 00:23:40.866341 kubelet[2096]: E0913 00:23:40.866260 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:40.867931 env[1315]: time="2025-09-13T00:23:40.867897863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:23:40.899040 kubelet[2096]: E0913 00:23:40.894456 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.899040 kubelet[2096]: W0913 00:23:40.894488 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.899040 kubelet[2096]: E0913 00:23:40.894510 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.904618 kubelet[2096]: E0913 00:23:40.902115 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xhmc" podUID="cdf08d6b-aedb-443c-a2b0-45b46a85e022"
Sep 13 00:23:40.904995 kubelet[2096]: E0913 00:23:40.904974 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.904995 kubelet[2096]: W0913 00:23:40.904995 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.905084 kubelet[2096]: E0913 00:23:40.905010 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.910394 kubelet[2096]: E0913 00:23:40.910359 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.910394 kubelet[2096]: W0913 00:23:40.910389 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.910520 kubelet[2096]: E0913 00:23:40.910414 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.980511 kubelet[2096]: E0913 00:23:40.980479 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.980511 kubelet[2096]: W0913 00:23:40.980505 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.980678 kubelet[2096]: E0913 00:23:40.980526 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.980706 kubelet[2096]: E0913 00:23:40.980677 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.980706 kubelet[2096]: W0913 00:23:40.980687 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.980706 kubelet[2096]: E0913 00:23:40.980695 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.980847 kubelet[2096]: E0913 00:23:40.980827 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.980847 kubelet[2096]: W0913 00:23:40.980844 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.980900 kubelet[2096]: E0913 00:23:40.980853 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.981002 kubelet[2096]: E0913 00:23:40.980984 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.981081 kubelet[2096]: W0913 00:23:40.981002 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.981081 kubelet[2096]: E0913 00:23:40.981012 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.981175 kubelet[2096]: E0913 00:23:40.981155 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.981175 kubelet[2096]: W0913 00:23:40.981174 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.981245 kubelet[2096]: E0913 00:23:40.981183 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.981417 kubelet[2096]: E0913 00:23:40.981367 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.981417 kubelet[2096]: W0913 00:23:40.981392 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.981417 kubelet[2096]: E0913 00:23:40.981403 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.981640 kubelet[2096]: E0913 00:23:40.981549 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.981640 kubelet[2096]: W0913 00:23:40.981564 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.981640 kubelet[2096]: E0913 00:23:40.981577 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.981784 kubelet[2096]: E0913 00:23:40.981707 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.981784 kubelet[2096]: W0913 00:23:40.981719 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.981784 kubelet[2096]: E0913 00:23:40.981729 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.981889 kubelet[2096]: E0913 00:23:40.981868 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.981889 kubelet[2096]: W0913 00:23:40.981886 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.981952 kubelet[2096]: E0913 00:23:40.981894 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.982046 kubelet[2096]: E0913 00:23:40.982026 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.982046 kubelet[2096]: W0913 00:23:40.982038 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.982116 kubelet[2096]: E0913 00:23:40.982047 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.982207 kubelet[2096]: E0913 00:23:40.982186 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.982207 kubelet[2096]: W0913 00:23:40.982195 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.982207 kubelet[2096]: E0913 00:23:40.982204 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.982373 kubelet[2096]: E0913 00:23:40.982347 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.982373 kubelet[2096]: W0913 00:23:40.982356 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.982373 kubelet[2096]: E0913 00:23:40.982364 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.982539 kubelet[2096]: E0913 00:23:40.982518 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.982539 kubelet[2096]: W0913 00:23:40.982536 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.982601 kubelet[2096]: E0913 00:23:40.982545 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.983354 kubelet[2096]: E0913 00:23:40.983326 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.983354 kubelet[2096]: W0913 00:23:40.983353 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.983446 kubelet[2096]: E0913 00:23:40.983366 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.983563 kubelet[2096]: E0913 00:23:40.983546 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.983593 kubelet[2096]: W0913 00:23:40.983565 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.983593 kubelet[2096]: E0913 00:23:40.983576 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.983731 kubelet[2096]: E0913 00:23:40.983718 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.983762 kubelet[2096]: W0913 00:23:40.983735 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.983762 kubelet[2096]: E0913 00:23:40.983745 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.983944 kubelet[2096]: E0913 00:23:40.983930 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.983944 kubelet[2096]: W0913 00:23:40.983942 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.984007 kubelet[2096]: E0913 00:23:40.983951 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.984108 kubelet[2096]: E0913 00:23:40.984088 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.984108 kubelet[2096]: W0913 00:23:40.984106 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.984165 kubelet[2096]: E0913 00:23:40.984116 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.984264 kubelet[2096]: E0913 00:23:40.984247 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.984300 kubelet[2096]: W0913 00:23:40.984264 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.984300 kubelet[2096]: E0913 00:23:40.984273 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.984472 kubelet[2096]: E0913 00:23:40.984457 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.984518 kubelet[2096]: W0913 00:23:40.984471 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.984518 kubelet[2096]: E0913 00:23:40.984482 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.988943 kubelet[2096]: E0913 00:23:40.988783 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.988943 kubelet[2096]: W0913 00:23:40.988798 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.988943 kubelet[2096]: E0913 00:23:40.988809 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:23:40.988943 kubelet[2096]: I0913 00:23:40.988844 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdf08d6b-aedb-443c-a2b0-45b46a85e022-socket-dir\") pod \"csi-node-driver-7xhmc\" (UID: \"cdf08d6b-aedb-443c-a2b0-45b46a85e022\") " pod="calico-system/csi-node-driver-7xhmc"
Sep 13 00:23:40.989270 kubelet[2096]: E0913 00:23:40.989165 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:23:40.989270 kubelet[2096]: W0913 00:23:40.989180 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:23:40.989270 kubelet[2096]: E0913 00:23:40.989209 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:23:40.989270 kubelet[2096]: I0913 00:23:40.989231 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdf08d6b-aedb-443c-a2b0-45b46a85e022-kubelet-dir\") pod \"csi-node-driver-7xhmc\" (UID: \"cdf08d6b-aedb-443c-a2b0-45b46a85e022\") " pod="calico-system/csi-node-driver-7xhmc" Sep 13 00:23:40.990501 kubelet[2096]: E0913 00:23:40.989441 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.990501 kubelet[2096]: W0913 00:23:40.989457 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.990501 kubelet[2096]: E0913 00:23:40.989478 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:40.990501 kubelet[2096]: E0913 00:23:40.989654 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.990501 kubelet[2096]: W0913 00:23:40.989662 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.990501 kubelet[2096]: E0913 00:23:40.989672 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:40.990501 kubelet[2096]: E0913 00:23:40.989944 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.990501 kubelet[2096]: W0913 00:23:40.989956 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.990501 kubelet[2096]: E0913 00:23:40.989970 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:40.990775 kubelet[2096]: I0913 00:23:40.989988 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdf08d6b-aedb-443c-a2b0-45b46a85e022-registration-dir\") pod \"csi-node-driver-7xhmc\" (UID: \"cdf08d6b-aedb-443c-a2b0-45b46a85e022\") " pod="calico-system/csi-node-driver-7xhmc" Sep 13 00:23:40.990775 kubelet[2096]: E0913 00:23:40.990253 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.990775 kubelet[2096]: W0913 00:23:40.990267 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.990775 kubelet[2096]: E0913 00:23:40.990279 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:40.990775 kubelet[2096]: I0913 00:23:40.990319 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cdf08d6b-aedb-443c-a2b0-45b46a85e022-varrun\") pod \"csi-node-driver-7xhmc\" (UID: \"cdf08d6b-aedb-443c-a2b0-45b46a85e022\") " pod="calico-system/csi-node-driver-7xhmc" Sep 13 00:23:40.990775 kubelet[2096]: E0913 00:23:40.990519 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.990775 kubelet[2096]: W0913 00:23:40.990531 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.990775 kubelet[2096]: E0913 00:23:40.990600 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:40.990949 kubelet[2096]: I0913 00:23:40.990630 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrm8\" (UniqueName: \"kubernetes.io/projected/cdf08d6b-aedb-443c-a2b0-45b46a85e022-kube-api-access-znrm8\") pod \"csi-node-driver-7xhmc\" (UID: \"cdf08d6b-aedb-443c-a2b0-45b46a85e022\") " pod="calico-system/csi-node-driver-7xhmc" Sep 13 00:23:40.990949 kubelet[2096]: E0913 00:23:40.990703 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.990949 kubelet[2096]: W0913 00:23:40.990712 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.990949 kubelet[2096]: E0913 00:23:40.990747 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:40.990949 kubelet[2096]: E0913 00:23:40.990864 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.990949 kubelet[2096]: W0913 00:23:40.990871 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.990949 kubelet[2096]: E0913 00:23:40.990882 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:40.991100 kubelet[2096]: E0913 00:23:40.991009 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.991100 kubelet[2096]: W0913 00:23:40.991017 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.991100 kubelet[2096]: E0913 00:23:40.991024 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:40.991704 kubelet[2096]: E0913 00:23:40.991351 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.991704 kubelet[2096]: W0913 00:23:40.991364 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.991704 kubelet[2096]: E0913 00:23:40.991411 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:40.991704 kubelet[2096]: E0913 00:23:40.991606 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.991704 kubelet[2096]: W0913 00:23:40.991624 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.991704 kubelet[2096]: E0913 00:23:40.991636 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:40.992654 kubelet[2096]: E0913 00:23:40.992633 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.992725 kubelet[2096]: W0913 00:23:40.992655 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.992725 kubelet[2096]: E0913 00:23:40.992672 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:40.992906 kubelet[2096]: E0913 00:23:40.992894 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.992906 kubelet[2096]: W0913 00:23:40.992907 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.992986 kubelet[2096]: E0913 00:23:40.992916 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:40.993276 kubelet[2096]: E0913 00:23:40.993257 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:40.993276 kubelet[2096]: W0913 00:23:40.993276 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:40.993358 kubelet[2096]: E0913 00:23:40.993296 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.003805 env[1315]: time="2025-09-13T00:23:41.003742096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c9dgc,Uid:9d58a4dd-9c15-4386-8022-99bef05536fc,Namespace:calico-system,Attempt:0,}" Sep 13 00:23:41.019589 env[1315]: time="2025-09-13T00:23:41.019513154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:41.019589 env[1315]: time="2025-09-13T00:23:41.019557953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:41.019766 env[1315]: time="2025-09-13T00:23:41.019567793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:41.019997 env[1315]: time="2025-09-13T00:23:41.019959867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24 pid=2612 runtime=io.containerd.runc.v2 Sep 13 00:23:41.086176 env[1315]: time="2025-09-13T00:23:41.083733929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c9dgc,Uid:9d58a4dd-9c15-4386-8022-99bef05536fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24\"" Sep 13 00:23:41.091443 kubelet[2096]: E0913 00:23:41.091412 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.091443 kubelet[2096]: W0913 00:23:41.091430 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.091443 kubelet[2096]: E0913 00:23:41.091448 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.091926 kubelet[2096]: E0913 00:23:41.091658 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.091926 kubelet[2096]: W0913 00:23:41.091673 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.091926 kubelet[2096]: E0913 00:23:41.091696 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.092053 kubelet[2096]: E0913 00:23:41.091931 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.092053 kubelet[2096]: W0913 00:23:41.091945 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.092053 kubelet[2096]: E0913 00:23:41.091955 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.092133 kubelet[2096]: E0913 00:23:41.092120 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.092133 kubelet[2096]: W0913 00:23:41.092128 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.092185 kubelet[2096]: E0913 00:23:41.092136 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.093578 kubelet[2096]: E0913 00:23:41.092301 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.093578 kubelet[2096]: W0913 00:23:41.092315 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.093578 kubelet[2096]: E0913 00:23:41.092325 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.093578 kubelet[2096]: E0913 00:23:41.092509 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.093578 kubelet[2096]: W0913 00:23:41.092517 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.093578 kubelet[2096]: E0913 00:23:41.092526 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.093578 kubelet[2096]: E0913 00:23:41.092646 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.093578 kubelet[2096]: W0913 00:23:41.092662 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.093578 kubelet[2096]: E0913 00:23:41.092670 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.093578 kubelet[2096]: E0913 00:23:41.092782 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.093895 kubelet[2096]: W0913 00:23:41.092796 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.093895 kubelet[2096]: E0913 00:23:41.092804 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.093895 kubelet[2096]: E0913 00:23:41.092973 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.093895 kubelet[2096]: W0913 00:23:41.092981 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.093895 kubelet[2096]: E0913 00:23:41.092989 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.093895 kubelet[2096]: E0913 00:23:41.093147 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.093895 kubelet[2096]: W0913 00:23:41.093155 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.093895 kubelet[2096]: E0913 00:23:41.093163 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.093895 kubelet[2096]: E0913 00:23:41.093303 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.093895 kubelet[2096]: W0913 00:23:41.093312 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.094164 kubelet[2096]: E0913 00:23:41.093360 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.094164 kubelet[2096]: E0913 00:23:41.094059 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.094164 kubelet[2096]: W0913 00:23:41.094071 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.094164 kubelet[2096]: E0913 00:23:41.094140 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.094320 kubelet[2096]: E0913 00:23:41.094301 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.094320 kubelet[2096]: W0913 00:23:41.094311 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.094376 kubelet[2096]: E0913 00:23:41.094367 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.094610 kubelet[2096]: E0913 00:23:41.094519 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.094610 kubelet[2096]: W0913 00:23:41.094533 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.094610 kubelet[2096]: E0913 00:23:41.094576 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.094720 kubelet[2096]: E0913 00:23:41.094697 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.094720 kubelet[2096]: W0913 00:23:41.094705 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.094772 kubelet[2096]: E0913 00:23:41.094748 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.097981 kubelet[2096]: E0913 00:23:41.094872 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.097981 kubelet[2096]: W0913 00:23:41.094883 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.097981 kubelet[2096]: E0913 00:23:41.094898 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.097981 kubelet[2096]: E0913 00:23:41.095048 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.097981 kubelet[2096]: W0913 00:23:41.095056 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.097981 kubelet[2096]: E0913 00:23:41.095067 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.097981 kubelet[2096]: E0913 00:23:41.095408 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.097981 kubelet[2096]: W0913 00:23:41.095418 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.097981 kubelet[2096]: E0913 00:23:41.095430 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.097981 kubelet[2096]: E0913 00:23:41.095621 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.098313 kubelet[2096]: W0913 00:23:41.095631 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.098313 kubelet[2096]: E0913 00:23:41.095641 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.098313 kubelet[2096]: E0913 00:23:41.095786 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.098313 kubelet[2096]: W0913 00:23:41.095794 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.098313 kubelet[2096]: E0913 00:23:41.095802 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.098313 kubelet[2096]: E0913 00:23:41.095982 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.098313 kubelet[2096]: W0913 00:23:41.095990 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.098313 kubelet[2096]: E0913 00:23:41.095999 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.098313 kubelet[2096]: E0913 00:23:41.096153 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.098313 kubelet[2096]: W0913 00:23:41.096160 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.098579 kubelet[2096]: E0913 00:23:41.096169 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.098579 kubelet[2096]: E0913 00:23:41.096322 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.098579 kubelet[2096]: W0913 00:23:41.096331 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.098579 kubelet[2096]: E0913 00:23:41.096340 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.098579 kubelet[2096]: E0913 00:23:41.096526 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.098579 kubelet[2096]: W0913 00:23:41.096534 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.098579 kubelet[2096]: E0913 00:23:41.096543 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.098579 kubelet[2096]: E0913 00:23:41.096712 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.098579 kubelet[2096]: W0913 00:23:41.096720 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.098579 kubelet[2096]: E0913 00:23:41.096729 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:41.111053 kubelet[2096]: E0913 00:23:41.111034 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:41.111053 kubelet[2096]: W0913 00:23:41.111050 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:41.111168 kubelet[2096]: E0913 00:23:41.111064 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:41.488000 audit[2673]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:41.489827 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 13 00:23:41.489914 kernel: audit: type=1325 audit(1757723021.488:282): table=filter:97 family=2 entries=20 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:41.488000 audit[2673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe1c04220 a2=0 a3=1 items=0 ppid=2207 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:41.495726 kernel: audit: type=1300 audit(1757723021.488:282): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe1c04220 a2=0 a3=1 items=0 ppid=2207 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:41.495812 kernel: audit: type=1327 audit(1757723021.488:282): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:41.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:41.503000 audit[2673]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:41.503000 audit[2673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1c04220 a2=0 a3=1 items=0 ppid=2207 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:41.508735 kernel: audit: type=1325 audit(1757723021.503:283): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:41.508794 kernel: audit: type=1300 audit(1757723021.503:283): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1c04220 a2=0 a3=1 items=0 ppid=2207 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:41.508824 kernel: audit: type=1327 audit(1757723021.503:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:41.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:41.828332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815897190.mount: Deactivated successfully. 
Sep 13 00:23:42.482151 env[1315]: time="2025-09-13T00:23:42.480479746Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:42.490069 env[1315]: time="2025-09-13T00:23:42.484673240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:42.490069 env[1315]: time="2025-09-13T00:23:42.487177760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:42.490069 env[1315]: time="2025-09-13T00:23:42.488836294Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:42.490069 env[1315]: time="2025-09-13T00:23:42.489244448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 13 00:23:42.494856 env[1315]: time="2025-09-13T00:23:42.494822600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:23:42.511653 env[1315]: time="2025-09-13T00:23:42.511616215Z" level=info msg="CreateContainer within sandbox \"d31f37f0253c294e4bef367a95aeacc6ab55b13f7efc4edc7bb7cb1a35a62e23\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:23:42.523621 env[1315]: time="2025-09-13T00:23:42.523578626Z" level=info msg="CreateContainer within sandbox \"d31f37f0253c294e4bef367a95aeacc6ab55b13f7efc4edc7bb7cb1a35a62e23\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"754ad81821b98f4ebdb6b9943ff7177b4a2a929698f75d53ed4b5651ab492a2c\"" Sep 13 00:23:42.524613 env[1315]: time="2025-09-13T00:23:42.524574170Z" level=info msg="StartContainer for \"754ad81821b98f4ebdb6b9943ff7177b4a2a929698f75d53ed4b5651ab492a2c\"" Sep 13 00:23:42.575359 kubelet[2096]: E0913 00:23:42.575302 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xhmc" podUID="cdf08d6b-aedb-443c-a2b0-45b46a85e022" Sep 13 00:23:42.595603 env[1315]: time="2025-09-13T00:23:42.595552010Z" level=info msg="StartContainer for \"754ad81821b98f4ebdb6b9943ff7177b4a2a929698f75d53ed4b5651ab492a2c\" returns successfully" Sep 13 00:23:42.629324 kubelet[2096]: E0913 00:23:42.629274 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:42.650654 kubelet[2096]: I0913 00:23:42.649100 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-578c6bf6f9-lbpbl" podStartSLOduration=1.02266822 podStartE2EDuration="2.649083085s" podCreationTimestamp="2025-09-13 00:23:40 +0000 UTC" firstStartedPulling="2025-09-13 00:23:40.867580588 +0000 UTC m=+18.434980927" lastFinishedPulling="2025-09-13 00:23:42.493995453 +0000 UTC m=+20.061395792" observedRunningTime="2025-09-13 00:23:42.648936447 +0000 UTC m=+20.216336786" watchObservedRunningTime="2025-09-13 00:23:42.649083085 +0000 UTC m=+20.216483424" Sep 13 00:23:42.695276 kubelet[2096]: E0913 00:23:42.695233 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.695276 kubelet[2096]: W0913 00:23:42.695268 2096 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.695464 kubelet[2096]: E0913 00:23:42.695293 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.696489 kubelet[2096]: E0913 00:23:42.696465 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.696489 kubelet[2096]: W0913 00:23:42.696483 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.696609 kubelet[2096]: E0913 00:23:42.696497 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.696732 kubelet[2096]: E0913 00:23:42.696710 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.696732 kubelet[2096]: W0913 00:23:42.696723 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.696732 kubelet[2096]: E0913 00:23:42.696733 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.697428 kubelet[2096]: E0913 00:23:42.697377 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.697428 kubelet[2096]: W0913 00:23:42.697408 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.697428 kubelet[2096]: E0913 00:23:42.697418 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.697590 kubelet[2096]: E0913 00:23:42.697575 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.697590 kubelet[2096]: W0913 00:23:42.697588 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.697651 kubelet[2096]: E0913 00:23:42.697596 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.697730 kubelet[2096]: E0913 00:23:42.697719 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.697758 kubelet[2096]: W0913 00:23:42.697730 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.697758 kubelet[2096]: E0913 00:23:42.697739 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.697872 kubelet[2096]: E0913 00:23:42.697860 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.697905 kubelet[2096]: W0913 00:23:42.697872 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.697905 kubelet[2096]: E0913 00:23:42.697880 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.698012 kubelet[2096]: E0913 00:23:42.698001 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.698046 kubelet[2096]: W0913 00:23:42.698012 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.698046 kubelet[2096]: E0913 00:23:42.698021 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.698166 kubelet[2096]: E0913 00:23:42.698155 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.698194 kubelet[2096]: W0913 00:23:42.698166 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.698194 kubelet[2096]: E0913 00:23:42.698175 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.698319 kubelet[2096]: E0913 00:23:42.698306 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.698319 kubelet[2096]: W0913 00:23:42.698318 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.698372 kubelet[2096]: E0913 00:23:42.698327 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.698473 kubelet[2096]: E0913 00:23:42.698462 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.698473 kubelet[2096]: W0913 00:23:42.698473 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.698528 kubelet[2096]: E0913 00:23:42.698481 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.698644 kubelet[2096]: E0913 00:23:42.698631 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.698673 kubelet[2096]: W0913 00:23:42.698644 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.698673 kubelet[2096]: E0913 00:23:42.698662 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.698857 kubelet[2096]: E0913 00:23:42.698843 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.698857 kubelet[2096]: W0913 00:23:42.698855 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.698927 kubelet[2096]: E0913 00:23:42.698864 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.699012 kubelet[2096]: E0913 00:23:42.698999 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.699012 kubelet[2096]: W0913 00:23:42.699010 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.699070 kubelet[2096]: E0913 00:23:42.699019 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.699149 kubelet[2096]: E0913 00:23:42.699137 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.699181 kubelet[2096]: W0913 00:23:42.699149 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.699181 kubelet[2096]: E0913 00:23:42.699157 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.702714 kubelet[2096]: E0913 00:23:42.702544 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.702714 kubelet[2096]: W0913 00:23:42.702561 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.702714 kubelet[2096]: E0913 00:23:42.702574 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.703065 kubelet[2096]: E0913 00:23:42.702908 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.703065 kubelet[2096]: W0913 00:23:42.702921 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.703065 kubelet[2096]: E0913 00:23:42.702943 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.703413 kubelet[2096]: E0913 00:23:42.703241 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.703413 kubelet[2096]: W0913 00:23:42.703266 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.703413 kubelet[2096]: E0913 00:23:42.703286 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.704293 kubelet[2096]: E0913 00:23:42.703876 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.704293 kubelet[2096]: W0913 00:23:42.703890 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.704293 kubelet[2096]: E0913 00:23:42.704084 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.704676 kubelet[2096]: E0913 00:23:42.704479 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.704676 kubelet[2096]: W0913 00:23:42.704494 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.704676 kubelet[2096]: E0913 00:23:42.704510 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.705010 kubelet[2096]: E0913 00:23:42.704826 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.705010 kubelet[2096]: W0913 00:23:42.704839 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.705010 kubelet[2096]: E0913 00:23:42.704918 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.705282 kubelet[2096]: E0913 00:23:42.705153 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.705282 kubelet[2096]: W0913 00:23:42.705166 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.705282 kubelet[2096]: E0913 00:23:42.705241 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.705656 kubelet[2096]: E0913 00:23:42.705550 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.705656 kubelet[2096]: W0913 00:23:42.705565 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.705656 kubelet[2096]: E0913 00:23:42.705580 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.705967 kubelet[2096]: E0913 00:23:42.705829 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.705967 kubelet[2096]: W0913 00:23:42.705841 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.705967 kubelet[2096]: E0913 00:23:42.705855 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.706575 kubelet[2096]: E0913 00:23:42.706468 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.706575 kubelet[2096]: W0913 00:23:42.706484 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.706575 kubelet[2096]: E0913 00:23:42.706544 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.706944 kubelet[2096]: E0913 00:23:42.706836 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.706944 kubelet[2096]: W0913 00:23:42.706849 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.706944 kubelet[2096]: E0913 00:23:42.706894 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.707292 kubelet[2096]: E0913 00:23:42.707124 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.707292 kubelet[2096]: W0913 00:23:42.707141 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.707292 kubelet[2096]: E0913 00:23:42.707225 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.707650 kubelet[2096]: E0913 00:23:42.707500 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.707650 kubelet[2096]: W0913 00:23:42.707511 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.707650 kubelet[2096]: E0913 00:23:42.707527 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.709549 kubelet[2096]: E0913 00:23:42.707811 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.709549 kubelet[2096]: W0913 00:23:42.707822 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.709549 kubelet[2096]: E0913 00:23:42.707836 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.709549 kubelet[2096]: E0913 00:23:42.708885 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.709549 kubelet[2096]: W0913 00:23:42.708918 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.709549 kubelet[2096]: E0913 00:23:42.708933 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.709549 kubelet[2096]: E0913 00:23:42.709143 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.709549 kubelet[2096]: W0913 00:23:42.709150 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.709549 kubelet[2096]: E0913 00:23:42.709162 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:42.709885 kubelet[2096]: E0913 00:23:42.709595 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.709885 kubelet[2096]: W0913 00:23:42.709607 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.709885 kubelet[2096]: E0913 00:23:42.709620 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:23:42.714327 kubelet[2096]: E0913 00:23:42.714301 2096 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:23:42.714327 kubelet[2096]: W0913 00:23:42.714323 2096 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:23:42.714509 kubelet[2096]: E0913 00:23:42.714342 2096 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:23:43.384095 env[1315]: time="2025-09-13T00:23:43.383956535Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:43.386073 env[1315]: time="2025-09-13T00:23:43.386023664Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:43.388069 env[1315]: time="2025-09-13T00:23:43.387860596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:43.389522 env[1315]: time="2025-09-13T00:23:43.389486812Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:43.390021 env[1315]: time="2025-09-13T00:23:43.389988964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:23:43.396367 env[1315]: time="2025-09-13T00:23:43.396321189Z" level=info msg="CreateContainer within sandbox \"60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:23:43.410631 env[1315]: time="2025-09-13T00:23:43.410597975Z" level=info msg="CreateContainer within sandbox \"60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"869c54d77bee67209a12c30e5681f05c03764584c510eaf7f66dee059b5cc94a\"" Sep 13 
00:23:43.411180 env[1315]: time="2025-09-13T00:23:43.411152046Z" level=info msg="StartContainer for \"869c54d77bee67209a12c30e5681f05c03764584c510eaf7f66dee059b5cc94a\"" Sep 13 00:23:43.494195 env[1315]: time="2025-09-13T00:23:43.494149759Z" level=info msg="StartContainer for \"869c54d77bee67209a12c30e5681f05c03764584c510eaf7f66dee059b5cc94a\" returns successfully" Sep 13 00:23:43.547822 env[1315]: time="2025-09-13T00:23:43.547777873Z" level=info msg="shim disconnected" id=869c54d77bee67209a12c30e5681f05c03764584c510eaf7f66dee059b5cc94a Sep 13 00:23:43.547822 env[1315]: time="2025-09-13T00:23:43.547820432Z" level=warning msg="cleaning up after shim disconnected" id=869c54d77bee67209a12c30e5681f05c03764584c510eaf7f66dee059b5cc94a namespace=k8s.io Sep 13 00:23:43.548032 env[1315]: time="2025-09-13T00:23:43.547829392Z" level=info msg="cleaning up dead shim" Sep 13 00:23:43.555000 env[1315]: time="2025-09-13T00:23:43.554961725Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:23:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2794 runtime=io.containerd.runc.v2\n" Sep 13 00:23:43.588516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-869c54d77bee67209a12c30e5681f05c03764584c510eaf7f66dee059b5cc94a-rootfs.mount: Deactivated successfully. 
Sep 13 00:23:43.633215 kubelet[2096]: I0913 00:23:43.633000 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:23:43.633581 kubelet[2096]: E0913 00:23:43.633412 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:43.634062 env[1315]: time="2025-09-13T00:23:43.633874579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:23:44.567948 kubelet[2096]: E0913 00:23:44.567879 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xhmc" podUID="cdf08d6b-aedb-443c-a2b0-45b46a85e022"
Sep 13 00:23:46.402017 env[1315]: time="2025-09-13T00:23:46.401973130Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:46.404012 env[1315]: time="2025-09-13T00:23:46.403974264Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:46.405864 env[1315]: time="2025-09-13T00:23:46.405839640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:46.407919 env[1315]: time="2025-09-13T00:23:46.407882933Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:23:46.408486 env[1315]: time="2025-09-13T00:23:46.408455966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 13 00:23:46.413072 env[1315]: time="2025-09-13T00:23:46.413035586Z" level=info msg="CreateContainer within sandbox \"60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:23:46.426847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751993373.mount: Deactivated successfully.
Sep 13 00:23:46.431579 env[1315]: time="2025-09-13T00:23:46.431546425Z" level=info msg="CreateContainer within sandbox \"60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d63cd763ecf5a4a0f765121542475aa95156efdc8bd656aba9aafa87126237d1\""
Sep 13 00:23:46.432130 env[1315]: time="2025-09-13T00:23:46.432103858Z" level=info msg="StartContainer for \"d63cd763ecf5a4a0f765121542475aa95156efdc8bd656aba9aafa87126237d1\""
Sep 13 00:23:46.488310 env[1315]: time="2025-09-13T00:23:46.488263045Z" level=info msg="StartContainer for \"d63cd763ecf5a4a0f765121542475aa95156efdc8bd656aba9aafa87126237d1\" returns successfully"
Sep 13 00:23:46.568091 kubelet[2096]: E0913 00:23:46.567510 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xhmc" podUID="cdf08d6b-aedb-443c-a2b0-45b46a85e022"
Sep 13 00:23:47.114171 env[1315]: time="2025-09-13T00:23:47.114098191Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:23:47.137333 env[1315]: time="2025-09-13T00:23:47.137290382Z" level=info msg="shim disconnected" id=d63cd763ecf5a4a0f765121542475aa95156efdc8bd656aba9aafa87126237d1
Sep 13 00:23:47.137578 env[1315]: time="2025-09-13T00:23:47.137557219Z" level=warning msg="cleaning up after shim disconnected" id=d63cd763ecf5a4a0f765121542475aa95156efdc8bd656aba9aafa87126237d1 namespace=k8s.io
Sep 13 00:23:47.137651 env[1315]: time="2025-09-13T00:23:47.137637818Z" level=info msg="cleaning up dead shim"
Sep 13 00:23:47.151918 env[1315]: time="2025-09-13T00:23:47.151821841Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:23:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2867 runtime=io.containerd.runc.v2\n"
Sep 13 00:23:47.165811 kubelet[2096]: I0913 00:23:47.165780 2096 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:23:47.333837 kubelet[2096]: I0913 00:23:47.333743 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc82s\" (UniqueName: \"kubernetes.io/projected/c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db-kube-api-access-rc82s\") pod \"coredns-7c65d6cfc9-bfvmh\" (UID: \"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db\") " pod="kube-system/coredns-7c65d6cfc9-bfvmh"
Sep 13 00:23:47.333837 kubelet[2096]: I0913 00:23:47.333843 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-ca-bundle\") pod \"whisker-56898f466d-kk7x6\" (UID: \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\") " pod="calico-system/whisker-56898f466d-kk7x6"
Sep 13 00:23:47.334026 kubelet[2096]: I0913 00:23:47.333864 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/734b3a8a-120f-488b-a1eb-812d2e9a1288-tigera-ca-bundle\") pod \"calico-kube-controllers-589698f46b-2w2b2\" (UID: \"734b3a8a-120f-488b-a1eb-812d2e9a1288\") " pod="calico-system/calico-kube-controllers-589698f46b-2w2b2"
Sep 13 00:23:47.334026 kubelet[2096]: I0913 00:23:47.333881 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0c72a36-f574-485d-b83f-4271860bd697-config-volume\") pod \"coredns-7c65d6cfc9-c5t9w\" (UID: \"c0c72a36-f574-485d-b83f-4271860bd697\") " pod="kube-system/coredns-7c65d6cfc9-c5t9w"
Sep 13 00:23:47.334026 kubelet[2096]: I0913 00:23:47.333921 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vx9\" (UniqueName: \"kubernetes.io/projected/87e4e83a-e9a0-426d-ae5e-a20862c0016b-kube-api-access-j4vx9\") pod \"whisker-56898f466d-kk7x6\" (UID: \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\") " pod="calico-system/whisker-56898f466d-kk7x6"
Sep 13 00:23:47.334252 kubelet[2096]: I0913 00:23:47.334233 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxd4\" (UniqueName: \"kubernetes.io/projected/893d60eb-0d9b-45af-8fda-3d0f54249b41-kube-api-access-cnxd4\") pod \"goldmane-7988f88666-r22kc\" (UID: \"893d60eb-0d9b-45af-8fda-3d0f54249b41\") " pod="calico-system/goldmane-7988f88666-r22kc"
Sep 13 00:23:47.334333 kubelet[2096]: I0913 00:23:47.334298 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czphs\" (UniqueName: \"kubernetes.io/projected/734b3a8a-120f-488b-a1eb-812d2e9a1288-kube-api-access-czphs\") pod \"calico-kube-controllers-589698f46b-2w2b2\" (UID: \"734b3a8a-120f-488b-a1eb-812d2e9a1288\") " pod="calico-system/calico-kube-controllers-589698f46b-2w2b2"
Sep 13 00:23:47.334377 kubelet[2096]: I0913 00:23:47.334342 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e90ef52a-67ed-4ab0-b978-c57c4259dadf-calico-apiserver-certs\") pod \"calico-apiserver-59db8f9d95-s27wv\" (UID: \"e90ef52a-67ed-4ab0-b978-c57c4259dadf\") " pod="calico-apiserver/calico-apiserver-59db8f9d95-s27wv"
Sep 13 00:23:47.334377 kubelet[2096]: I0913 00:23:47.334362 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-backend-key-pair\") pod \"whisker-56898f466d-kk7x6\" (UID: \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\") " pod="calico-system/whisker-56898f466d-kk7x6"
Sep 13 00:23:47.334455 kubelet[2096]: I0913 00:23:47.334400 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5f5c1be-6355-4194-9496-65fe9b497b32-calico-apiserver-certs\") pod \"calico-apiserver-59db8f9d95-nhnkq\" (UID: \"a5f5c1be-6355-4194-9496-65fe9b497b32\") " pod="calico-apiserver/calico-apiserver-59db8f9d95-nhnkq"
Sep 13 00:23:47.334455 kubelet[2096]: I0913 00:23:47.334425 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwk2g\" (UniqueName: \"kubernetes.io/projected/a5f5c1be-6355-4194-9496-65fe9b497b32-kube-api-access-kwk2g\") pod \"calico-apiserver-59db8f9d95-nhnkq\" (UID: \"a5f5c1be-6355-4194-9496-65fe9b497b32\") " pod="calico-apiserver/calico-apiserver-59db8f9d95-nhnkq"
Sep 13 00:23:47.334455 kubelet[2096]: I0913 00:23:47.334449 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893d60eb-0d9b-45af-8fda-3d0f54249b41-config\") pod \"goldmane-7988f88666-r22kc\" (UID: \"893d60eb-0d9b-45af-8fda-3d0f54249b41\") " pod="calico-system/goldmane-7988f88666-r22kc"
Sep 13 00:23:47.334535 kubelet[2096]: I0913 00:23:47.334480 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rjqw\" (UniqueName: \"kubernetes.io/projected/c0c72a36-f574-485d-b83f-4271860bd697-kube-api-access-7rjqw\") pod \"coredns-7c65d6cfc9-c5t9w\" (UID: \"c0c72a36-f574-485d-b83f-4271860bd697\") " pod="kube-system/coredns-7c65d6cfc9-c5t9w"
Sep 13 00:23:47.334535 kubelet[2096]: I0913 00:23:47.334498 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/893d60eb-0d9b-45af-8fda-3d0f54249b41-goldmane-key-pair\") pod \"goldmane-7988f88666-r22kc\" (UID: \"893d60eb-0d9b-45af-8fda-3d0f54249b41\") " pod="calico-system/goldmane-7988f88666-r22kc"
Sep 13 00:23:47.334535 kubelet[2096]: I0913 00:23:47.334515 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/893d60eb-0d9b-45af-8fda-3d0f54249b41-goldmane-ca-bundle\") pod \"goldmane-7988f88666-r22kc\" (UID: \"893d60eb-0d9b-45af-8fda-3d0f54249b41\") " pod="calico-system/goldmane-7988f88666-r22kc"
Sep 13 00:23:47.334535 kubelet[2096]: I0913 00:23:47.334532 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xtcb\" (UniqueName: \"kubernetes.io/projected/e90ef52a-67ed-4ab0-b978-c57c4259dadf-kube-api-access-8xtcb\") pod \"calico-apiserver-59db8f9d95-s27wv\" (UID: \"e90ef52a-67ed-4ab0-b978-c57c4259dadf\") " pod="calico-apiserver/calico-apiserver-59db8f9d95-s27wv"
Sep 13 00:23:47.334632 kubelet[2096]: I0913 00:23:47.334549 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db-config-volume\") pod \"coredns-7c65d6cfc9-bfvmh\" (UID: \"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db\") " pod="kube-system/coredns-7c65d6cfc9-bfvmh"
Sep 13 00:23:47.424271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d63cd763ecf5a4a0f765121542475aa95156efdc8bd656aba9aafa87126237d1-rootfs.mount: Deactivated successfully.
Sep 13 00:23:47.497493 kubelet[2096]: E0913 00:23:47.497421 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:47.498412 env[1315]: time="2025-09-13T00:23:47.498246966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c5t9w,Uid:c0c72a36-f574-485d-b83f-4271860bd697,Namespace:kube-system,Attempt:0,}"
Sep 13 00:23:47.506458 env[1315]: time="2025-09-13T00:23:47.506423584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56898f466d-kk7x6,Uid:87e4e83a-e9a0-426d-ae5e-a20862c0016b,Namespace:calico-system,Attempt:0,}"
Sep 13 00:23:47.506739 env[1315]: time="2025-09-13T00:23:47.506694541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-r22kc,Uid:893d60eb-0d9b-45af-8fda-3d0f54249b41,Namespace:calico-system,Attempt:0,}"
Sep 13 00:23:47.513743 env[1315]: time="2025-09-13T00:23:47.513697413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-s27wv,Uid:e90ef52a-67ed-4ab0-b978-c57c4259dadf,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:23:47.517244 env[1315]: time="2025-09-13T00:23:47.516909133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-nhnkq,Uid:a5f5c1be-6355-4194-9496-65fe9b497b32,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:23:47.525298 kubelet[2096]: E0913 00:23:47.525219 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:23:47.526460 env[1315]: time="2025-09-13T00:23:47.526394535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589698f46b-2w2b2,Uid:734b3a8a-120f-488b-a1eb-812d2e9a1288,Namespace:calico-system,Attempt:0,}"
Sep 13 00:23:47.526632 env[1315]: time="2025-09-13T00:23:47.526608573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfvmh,Uid:c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db,Namespace:kube-system,Attempt:0,}"
Sep 13 00:23:47.646984 env[1315]: time="2025-09-13T00:23:47.646931274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 13 00:23:47.649222 env[1315]: time="2025-09-13T00:23:47.648255417Z" level=error msg="Failed to destroy network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.652861 env[1315]: time="2025-09-13T00:23:47.652822920Z" level=error msg="Failed to destroy network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.654435 env[1315]: time="2025-09-13T00:23:47.654399261Z" level=error msg="encountered an error cleaning up failed sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.654578 env[1315]: time="2025-09-13T00:23:47.654548739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfvmh,Uid:c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.654712 env[1315]: time="2025-09-13T00:23:47.654413861Z" level=error msg="encountered an error cleaning up failed sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.654967 env[1315]: time="2025-09-13T00:23:47.654933214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56898f466d-kk7x6,Uid:87e4e83a-e9a0-426d-ae5e-a20862c0016b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.655955 kubelet[2096]: E0913 00:23:47.655649 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.655955 kubelet[2096]: E0913 00:23:47.655710 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bfvmh"
Sep 13 00:23:47.655955 kubelet[2096]: E0913 00:23:47.655730 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bfvmh"
Sep 13 00:23:47.656302 kubelet[2096]: E0913 00:23:47.655779 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bfvmh_kube-system(c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bfvmh_kube-system(c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bfvmh" podUID="c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db"
Sep 13 00:23:47.656624 kubelet[2096]: E0913 00:23:47.656498 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.656624 kubelet[2096]: E0913 00:23:47.656534 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56898f466d-kk7x6"
Sep 13 00:23:47.656624 kubelet[2096]: E0913 00:23:47.656550 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56898f466d-kk7x6"
Sep 13 00:23:47.656749 kubelet[2096]: E0913 00:23:47.656577 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56898f466d-kk7x6_calico-system(87e4e83a-e9a0-426d-ae5e-a20862c0016b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56898f466d-kk7x6_calico-system(87e4e83a-e9a0-426d-ae5e-a20862c0016b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56898f466d-kk7x6" podUID="87e4e83a-e9a0-426d-ae5e-a20862c0016b"
Sep 13 00:23:47.658117 env[1315]: time="2025-09-13T00:23:47.658077975Z" level=error msg="Failed to destroy network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.658536 env[1315]: time="2025-09-13T00:23:47.658502450Z" level=error msg="encountered an error cleaning up failed sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.658646 env[1315]: time="2025-09-13T00:23:47.658618808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-r22kc,Uid:893d60eb-0d9b-45af-8fda-3d0f54249b41,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.658965 kubelet[2096]: E0913 00:23:47.658842 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.658965 kubelet[2096]: E0913 00:23:47.658883 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-r22kc"
Sep 13 00:23:47.658965 kubelet[2096]: E0913 00:23:47.658898 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-r22kc"
Sep 13 00:23:47.659081 kubelet[2096]: E0913 00:23:47.658924 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-r22kc_calico-system(893d60eb-0d9b-45af-8fda-3d0f54249b41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-r22kc_calico-system(893d60eb-0d9b-45af-8fda-3d0f54249b41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-r22kc" podUID="893d60eb-0d9b-45af-8fda-3d0f54249b41"
Sep 13 00:23:47.664478 env[1315]: time="2025-09-13T00:23:47.662522400Z" level=error msg="Failed to destroy network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.664950 env[1315]: time="2025-09-13T00:23:47.664911610Z" level=error msg="encountered an error cleaning up failed sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.665119 env[1315]: time="2025-09-13T00:23:47.665088728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-s27wv,Uid:e90ef52a-67ed-4ab0-b978-c57c4259dadf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.665512 kubelet[2096]: E0913 00:23:47.665342 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.665512 kubelet[2096]: E0913 00:23:47.665414 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59db8f9d95-s27wv"
Sep 13 00:23:47.665512 kubelet[2096]: E0913 00:23:47.665433 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59db8f9d95-s27wv"
Sep 13 00:23:47.665671 kubelet[2096]: E0913 00:23:47.665468 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59db8f9d95-s27wv_calico-apiserver(e90ef52a-67ed-4ab0-b978-c57c4259dadf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59db8f9d95-s27wv_calico-apiserver(e90ef52a-67ed-4ab0-b978-c57c4259dadf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59db8f9d95-s27wv" podUID="e90ef52a-67ed-4ab0-b978-c57c4259dadf"
Sep 13 00:23:47.672894 env[1315]: time="2025-09-13T00:23:47.672841711Z" level=error msg="Failed to destroy network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.673213 env[1315]: time="2025-09-13T00:23:47.673179427Z" level=error msg="encountered an error cleaning up failed sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.673267 env[1315]: time="2025-09-13T00:23:47.673228426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c5t9w,Uid:c0c72a36-f574-485d-b83f-4271860bd697,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.673654 kubelet[2096]: E0913 00:23:47.673392 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.673654 kubelet[2096]: E0913 00:23:47.673434 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c5t9w"
Sep 13 00:23:47.673654 kubelet[2096]: E0913 00:23:47.673460 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c5t9w"
Sep 13 00:23:47.673842 kubelet[2096]: E0913 00:23:47.673491 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-c5t9w_kube-system(c0c72a36-f574-485d-b83f-4271860bd697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-c5t9w_kube-system(c0c72a36-f574-485d-b83f-4271860bd697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-c5t9w" podUID="c0c72a36-f574-485d-b83f-4271860bd697"
Sep 13 00:23:47.681578 env[1315]: time="2025-09-13T00:23:47.681488923Z" level=error msg="Failed to destroy network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.683985 env[1315]: time="2025-09-13T00:23:47.683941933Z" level=error msg="encountered an error cleaning up failed sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.684064 env[1315]: time="2025-09-13T00:23:47.684000652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-nhnkq,Uid:a5f5c1be-6355-4194-9496-65fe9b497b32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.684218 kubelet[2096]: E0913 00:23:47.684191 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.684267 kubelet[2096]: E0913 00:23:47.684232 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59db8f9d95-nhnkq"
Sep 13 00:23:47.684267 kubelet[2096]: E0913 00:23:47.684248 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59db8f9d95-nhnkq"
Sep 13 00:23:47.684327 kubelet[2096]: E0913 00:23:47.684295 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59db8f9d95-nhnkq_calico-apiserver(a5f5c1be-6355-4194-9496-65fe9b497b32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59db8f9d95-nhnkq_calico-apiserver(a5f5c1be-6355-4194-9496-65fe9b497b32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59db8f9d95-nhnkq" podUID="a5f5c1be-6355-4194-9496-65fe9b497b32"
Sep 13 00:23:47.693155 env[1315]: time="2025-09-13T00:23:47.693101739Z" level=error msg="Failed to destroy network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.693460 env[1315]: time="2025-09-13T00:23:47.693430614Z" level=error msg="encountered an error cleaning up failed sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.693507 env[1315]: time="2025-09-13T00:23:47.693482494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589698f46b-2w2b2,Uid:734b3a8a-120f-488b-a1eb-812d2e9a1288,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:23:47.693692 kubelet[2096]: E0913 00:23:47.693644 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\":
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:47.693740 kubelet[2096]: E0913 00:23:47.693704 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589698f46b-2w2b2" Sep 13 00:23:47.693740 kubelet[2096]: E0913 00:23:47.693720 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589698f46b-2w2b2" Sep 13 00:23:47.693797 kubelet[2096]: E0913 00:23:47.693762 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589698f46b-2w2b2_calico-system(734b3a8a-120f-488b-a1eb-812d2e9a1288)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589698f46b-2w2b2_calico-system(734b3a8a-120f-488b-a1eb-812d2e9a1288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-589698f46b-2w2b2" podUID="734b3a8a-120f-488b-a1eb-812d2e9a1288" Sep 13 00:23:48.571662 env[1315]: time="2025-09-13T00:23:48.571221108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xhmc,Uid:cdf08d6b-aedb-443c-a2b0-45b46a85e022,Namespace:calico-system,Attempt:0,}" Sep 13 00:23:48.634052 env[1315]: time="2025-09-13T00:23:48.633991801Z" level=error msg="Failed to destroy network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.640467 env[1315]: time="2025-09-13T00:23:48.640364605Z" level=error msg="encountered an error cleaning up failed sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.641823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a-shm.mount: Deactivated successfully. 
Sep 13 00:23:48.642211 env[1315]: time="2025-09-13T00:23:48.642171423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xhmc,Uid:cdf08d6b-aedb-443c-a2b0-45b46a85e022,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.643008 kubelet[2096]: E0913 00:23:48.642960 2096 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.643098 kubelet[2096]: E0913 00:23:48.643021 2096 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xhmc" Sep 13 00:23:48.643098 kubelet[2096]: E0913 00:23:48.643040 2096 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xhmc" Sep 13 00:23:48.643098 kubelet[2096]: 
E0913 00:23:48.643079 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7xhmc_calico-system(cdf08d6b-aedb-443c-a2b0-45b46a85e022)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7xhmc_calico-system(cdf08d6b-aedb-443c-a2b0-45b46a85e022)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xhmc" podUID="cdf08d6b-aedb-443c-a2b0-45b46a85e022" Sep 13 00:23:48.648048 kubelet[2096]: I0913 00:23:48.647994 2096 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:23:48.650942 kubelet[2096]: I0913 00:23:48.650915 2096 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:23:48.651942 env[1315]: time="2025-09-13T00:23:48.651915147Z" level=info msg="StopPodSandbox for \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\"" Sep 13 00:23:48.652213 env[1315]: time="2025-09-13T00:23:48.650775401Z" level=info msg="StopPodSandbox for \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\"" Sep 13 00:23:48.654168 kubelet[2096]: I0913 00:23:48.654107 2096 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:23:48.654760 env[1315]: time="2025-09-13T00:23:48.654717754Z" level=info msg="StopPodSandbox for \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\"" Sep 13 00:23:48.668861 kubelet[2096]: I0913 00:23:48.668829 2096 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:23:48.669767 env[1315]: time="2025-09-13T00:23:48.669721615Z" level=info msg="StopPodSandbox for \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\"" Sep 13 00:23:48.670951 kubelet[2096]: I0913 00:23:48.670914 2096 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:23:48.671551 env[1315]: time="2025-09-13T00:23:48.671516234Z" level=info msg="StopPodSandbox for \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\"" Sep 13 00:23:48.672895 kubelet[2096]: I0913 00:23:48.672865 2096 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:23:48.673957 env[1315]: time="2025-09-13T00:23:48.673921885Z" level=info msg="StopPodSandbox for \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\"" Sep 13 00:23:48.675929 kubelet[2096]: I0913 00:23:48.675502 2096 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:23:48.676919 env[1315]: time="2025-09-13T00:23:48.676670892Z" level=info msg="StopPodSandbox for \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\"" Sep 13 00:23:48.676995 kubelet[2096]: I0913 00:23:48.676783 2096 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:23:48.679538 env[1315]: time="2025-09-13T00:23:48.679495979Z" level=info msg="StopPodSandbox for \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\"" Sep 13 00:23:48.712749 env[1315]: time="2025-09-13T00:23:48.712661224Z" level=error msg="StopPodSandbox 
for \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\" failed" error="failed to destroy network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.713234 kubelet[2096]: E0913 00:23:48.713115 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:23:48.713355 kubelet[2096]: E0913 00:23:48.713226 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411"} Sep 13 00:23:48.713355 kubelet[2096]: E0913 00:23:48.713307 2096 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5f5c1be-6355-4194-9496-65fe9b497b32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:23:48.713464 kubelet[2096]: E0913 00:23:48.713331 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5f5c1be-6355-4194-9496-65fe9b497b32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59db8f9d95-nhnkq" podUID="a5f5c1be-6355-4194-9496-65fe9b497b32" Sep 13 00:23:48.784606 env[1315]: time="2025-09-13T00:23:48.784534088Z" level=error msg="StopPodSandbox for \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\" failed" error="failed to destroy network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.784853 kubelet[2096]: E0913 00:23:48.784769 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:23:48.784853 kubelet[2096]: E0913 00:23:48.784844 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6"} Sep 13 00:23:48.784979 kubelet[2096]: E0913 00:23:48.784875 2096 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"893d60eb-0d9b-45af-8fda-3d0f54249b41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:23:48.784979 kubelet[2096]: E0913 00:23:48.784897 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"893d60eb-0d9b-45af-8fda-3d0f54249b41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-r22kc" podUID="893d60eb-0d9b-45af-8fda-3d0f54249b41" Sep 13 00:23:48.790612 env[1315]: time="2025-09-13T00:23:48.790550256Z" level=error msg="StopPodSandbox for \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\" failed" error="failed to destroy network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.790942 kubelet[2096]: E0913 00:23:48.790898 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:23:48.791006 kubelet[2096]: E0913 00:23:48.790961 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007"} Sep 13 00:23:48.791006 kubelet[2096]: E0913 00:23:48.790993 2096 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"734b3a8a-120f-488b-a1eb-812d2e9a1288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:23:48.791089 kubelet[2096]: E0913 00:23:48.791023 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"734b3a8a-120f-488b-a1eb-812d2e9a1288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589698f46b-2w2b2" podUID="734b3a8a-120f-488b-a1eb-812d2e9a1288" Sep 13 00:23:48.796501 env[1315]: time="2025-09-13T00:23:48.796439786Z" level=error msg="StopPodSandbox for \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\" failed" error="failed to destroy network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.796650 env[1315]: time="2025-09-13T00:23:48.796457706Z" level=error msg="StopPodSandbox for \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\" failed" 
error="failed to destroy network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.796965 kubelet[2096]: E0913 00:23:48.796833 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:23:48.796965 kubelet[2096]: E0913 00:23:48.796887 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a"} Sep 13 00:23:48.796965 kubelet[2096]: E0913 00:23:48.796921 2096 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdf08d6b-aedb-443c-a2b0-45b46a85e022\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:23:48.796965 kubelet[2096]: E0913 00:23:48.796836 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:23:48.797211 kubelet[2096]: E0913 00:23:48.796974 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241"} Sep 13 00:23:48.797211 kubelet[2096]: E0913 00:23:48.797006 2096 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0c72a36-f574-485d-b83f-4271860bd697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:23:48.797211 kubelet[2096]: E0913 00:23:48.797025 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0c72a36-f574-485d-b83f-4271860bd697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-c5t9w" podUID="c0c72a36-f574-485d-b83f-4271860bd697" Sep 13 00:23:48.797211 kubelet[2096]: E0913 00:23:48.796944 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdf08d6b-aedb-443c-a2b0-45b46a85e022\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xhmc" podUID="cdf08d6b-aedb-443c-a2b0-45b46a85e022" Sep 13 00:23:48.799030 env[1315]: time="2025-09-13T00:23:48.798959516Z" level=error msg="StopPodSandbox for \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\" failed" error="failed to destroy network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.799200 kubelet[2096]: E0913 00:23:48.799165 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:23:48.799259 kubelet[2096]: E0913 00:23:48.799209 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440"} Sep 13 00:23:48.799259 kubelet[2096]: E0913 00:23:48.799244 2096 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e90ef52a-67ed-4ab0-b978-c57c4259dadf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Sep 13 00:23:48.799341 kubelet[2096]: E0913 00:23:48.799264 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e90ef52a-67ed-4ab0-b978-c57c4259dadf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59db8f9d95-s27wv" podUID="e90ef52a-67ed-4ab0-b978-c57c4259dadf" Sep 13 00:23:48.802583 env[1315]: time="2025-09-13T00:23:48.802527793Z" level=error msg="StopPodSandbox for \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\" failed" error="failed to destroy network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.802809 kubelet[2096]: E0913 00:23:48.802777 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:23:48.802855 kubelet[2096]: E0913 00:23:48.802827 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76"} Sep 13 00:23:48.802884 kubelet[2096]: E0913 00:23:48.802865 2096 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:23:48.802930 kubelet[2096]: E0913 00:23:48.802890 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bfvmh" podUID="c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db" Sep 13 00:23:48.811664 env[1315]: time="2025-09-13T00:23:48.811609965Z" level=error msg="StopPodSandbox for \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\" failed" error="failed to destroy network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:23:48.812668 kubelet[2096]: E0913 00:23:48.811818 2096 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:23:48.812668 kubelet[2096]: E0913 00:23:48.811862 2096 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d"} Sep 13 00:23:48.812668 kubelet[2096]: E0913 00:23:48.811891 2096 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:23:48.812668 kubelet[2096]: E0913 00:23:48.811912 2096 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56898f466d-kk7x6" podUID="87e4e83a-e9a0-426d-ae5e-a20862c0016b" Sep 13 00:23:50.407559 kubelet[2096]: I0913 00:23:50.407417 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:23:50.407967 kubelet[2096]: E0913 00:23:50.407757 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:50.451000 audit[3307]: 
NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:50.451000 audit[3307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffedb52530 a2=0 a3=1 items=0 ppid=2207 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:50.456850 kernel: audit: type=1325 audit(1757723030.451:284): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:50.456921 kernel: audit: type=1300 audit(1757723030.451:284): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffedb52530 a2=0 a3=1 items=0 ppid=2207 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:50.456945 kernel: audit: type=1327 audit(1757723030.451:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:50.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:50.459000 audit[3307]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:50.459000 audit[3307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffedb52530 a2=0 a3=1 items=0 ppid=2207 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:50.464242 kernel: audit: type=1325 
audit(1757723030.459:285): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:50.464347 kernel: audit: type=1300 audit(1757723030.459:285): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffedb52530 a2=0 a3=1 items=0 ppid=2207 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:50.464377 kernel: audit: type=1327 audit(1757723030.459:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:50.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:50.680819 kubelet[2096]: E0913 00:23:50.680720 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:51.946419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310518659.mount: Deactivated successfully. 
Sep 13 00:23:52.204152 env[1315]: time="2025-09-13T00:23:52.204014363Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:52.206059 env[1315]: time="2025-09-13T00:23:52.206015943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:52.207523 env[1315]: time="2025-09-13T00:23:52.207497528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:52.208838 env[1315]: time="2025-09-13T00:23:52.208802035Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:52.209308 env[1315]: time="2025-09-13T00:23:52.209280270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:23:52.227891 env[1315]: time="2025-09-13T00:23:52.227848884Z" level=info msg="CreateContainer within sandbox \"60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:23:52.246186 env[1315]: time="2025-09-13T00:23:52.246143420Z" level=info msg="CreateContainer within sandbox \"60b66288ed18a0dbba1ec99c73b8b4053b0183daa488f83cd937157c4f6abc24\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c15bfcaebaec07fca55fa274eb1be991b378b82fc55344d4383e1a3bc3617055\"" Sep 13 00:23:52.247226 env[1315]: time="2025-09-13T00:23:52.247181089Z" level=info msg="StartContainer for 
\"c15bfcaebaec07fca55fa274eb1be991b378b82fc55344d4383e1a3bc3617055\"" Sep 13 00:23:52.305639 env[1315]: time="2025-09-13T00:23:52.305587062Z" level=info msg="StartContainer for \"c15bfcaebaec07fca55fa274eb1be991b378b82fc55344d4383e1a3bc3617055\" returns successfully" Sep 13 00:23:52.428059 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:23:52.428174 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:23:52.527728 env[1315]: time="2025-09-13T00:23:52.527614272Z" level=info msg="StopPodSandbox for \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\"" Sep 13 00:23:52.709745 kubelet[2096]: I0913 00:23:52.709541 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c9dgc" podStartSLOduration=1.584145334 podStartE2EDuration="12.709521284s" podCreationTimestamp="2025-09-13 00:23:40 +0000 UTC" firstStartedPulling="2025-09-13 00:23:41.08490279 +0000 UTC m=+18.652303089" lastFinishedPulling="2025-09-13 00:23:52.2102787 +0000 UTC m=+29.777679039" observedRunningTime="2025-09-13 00:23:52.709268766 +0000 UTC m=+30.276669105" watchObservedRunningTime="2025-09-13 00:23:52.709521284 +0000 UTC m=+30.276921583" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.647 [INFO][3373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.648 [INFO][3373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" iface="eth0" netns="/var/run/netns/cni-49691db2-a9cb-3a01-4b92-ac05a39d3303" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.650 [INFO][3373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" iface="eth0" netns="/var/run/netns/cni-49691db2-a9cb-3a01-4b92-ac05a39d3303" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.651 [INFO][3373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" iface="eth0" netns="/var/run/netns/cni-49691db2-a9cb-3a01-4b92-ac05a39d3303" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.651 [INFO][3373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.651 [INFO][3373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.762 [INFO][3384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.762 [INFO][3384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.762 [INFO][3384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.772 [WARNING][3384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.772 [INFO][3384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.773 [INFO][3384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:23:52.778148 env[1315]: 2025-09-13 00:23:52.775 [INFO][3373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:23:52.778148 env[1315]: time="2025-09-13T00:23:52.777605200Z" level=info msg="TearDown network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\" successfully" Sep 13 00:23:52.778148 env[1315]: time="2025-09-13T00:23:52.777638879Z" level=info msg="StopPodSandbox for \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\" returns successfully" Sep 13 00:23:52.938645 systemd[1]: run-netns-cni\x2d49691db2\x2da9cb\x2d3a01\x2d4b92\x2dac05a39d3303.mount: Deactivated successfully. 
Sep 13 00:23:52.970013 kubelet[2096]: I0913 00:23:52.969955 2096 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4vx9\" (UniqueName: \"kubernetes.io/projected/87e4e83a-e9a0-426d-ae5e-a20862c0016b-kube-api-access-j4vx9\") pod \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\" (UID: \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\") " Sep 13 00:23:52.970013 kubelet[2096]: I0913 00:23:52.970017 2096 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-backend-key-pair\") pod \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\" (UID: \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\") " Sep 13 00:23:52.970162 kubelet[2096]: I0913 00:23:52.970044 2096 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-ca-bundle\") pod \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\" (UID: \"87e4e83a-e9a0-426d-ae5e-a20862c0016b\") " Sep 13 00:23:52.975076 kubelet[2096]: I0913 00:23:52.975017 2096 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "87e4e83a-e9a0-426d-ae5e-a20862c0016b" (UID: "87e4e83a-e9a0-426d-ae5e-a20862c0016b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:23:52.978712 systemd[1]: var-lib-kubelet-pods-87e4e83a\x2de9a0\x2d426d\x2dae5e\x2da20862c0016b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj4vx9.mount: Deactivated successfully. Sep 13 00:23:52.978868 systemd[1]: var-lib-kubelet-pods-87e4e83a\x2de9a0\x2d426d\x2dae5e\x2da20862c0016b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:23:52.980717 kubelet[2096]: I0913 00:23:52.980543 2096 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e4e83a-e9a0-426d-ae5e-a20862c0016b-kube-api-access-j4vx9" (OuterVolumeSpecName: "kube-api-access-j4vx9") pod "87e4e83a-e9a0-426d-ae5e-a20862c0016b" (UID: "87e4e83a-e9a0-426d-ae5e-a20862c0016b"). InnerVolumeSpecName "kube-api-access-j4vx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:23:52.981249 kubelet[2096]: I0913 00:23:52.981196 2096 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "87e4e83a-e9a0-426d-ae5e-a20862c0016b" (UID: "87e4e83a-e9a0-426d-ae5e-a20862c0016b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:23:53.071099 kubelet[2096]: I0913 00:23:53.071032 2096 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4vx9\" (UniqueName: \"kubernetes.io/projected/87e4e83a-e9a0-426d-ae5e-a20862c0016b-kube-api-access-j4vx9\") on node \"localhost\" DevicePath \"\"" Sep 13 00:23:53.071099 kubelet[2096]: I0913 00:23:53.071075 2096 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:23:53.071099 kubelet[2096]: I0913 00:23:53.071085 2096 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e4e83a-e9a0-426d-ae5e-a20862c0016b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:23:53.690956 kubelet[2096]: I0913 00:23:53.690925 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:23:53.804000 audit[3442]: AVC avc: denied { write } for pid=3442 comm="tee" 
name="fd" dev="proc" ino=19889 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.804000 audit[3442]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd8d167e7 a2=241 a3=1b6 items=1 ppid=3421 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.810420 kernel: audit: type=1400 audit(1757723033.804:286): avc: denied { write } for pid=3442 comm="tee" name="fd" dev="proc" ino=19889 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.810512 kernel: audit: type=1300 audit(1757723033.804:286): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd8d167e7 a2=241 a3=1b6 items=1 ppid=3421 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.810537 kernel: audit: type=1307 audit(1757723033.804:286): cwd="/etc/service/enabled/cni/log" Sep 13 00:23:53.804000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:23:53.804000 audit: PATH item=0 name="/dev/fd/63" inode=18930 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.813090 kernel: audit: type=1302 audit(1757723033.804:286): item=0 name="/dev/fd/63" inode=18930 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.804000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:23:53.809000 audit[3460]: 
AVC avc: denied { write } for pid=3460 comm="tee" name="fd" dev="proc" ino=19897 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.809000 audit[3460]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6dcd7e5 a2=241 a3=1b6 items=1 ppid=3415 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.809000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:23:53.809000 audit: PATH item=0 name="/dev/fd/63" inode=19887 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:23:53.811000 audit[3476]: AVC avc: denied { write } for pid=3476 comm="tee" name="fd" dev="proc" ino=18945 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.811000 audit[3476]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcebe37e5 a2=241 a3=1b6 items=1 ppid=3413 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.811000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:23:53.811000 audit: PATH item=0 name="/dev/fd/63" inode=19893 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.811000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:23:53.811000 audit[3465]: AVC avc: denied { write } for pid=3465 comm="tee" name="fd" dev="proc" ino=18949 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.811000 audit[3465]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd73587e5 a2=241 a3=1b6 items=1 ppid=3414 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.811000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:23:53.811000 audit: PATH item=0 name="/dev/fd/63" inode=19888 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:23:53.814000 audit[3480]: AVC avc: denied { write } for pid=3480 comm="tee" name="fd" dev="proc" ino=18953 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.814000 audit[3480]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe69d97e6 a2=241 a3=1b6 items=1 ppid=3418 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.814000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:23:53.814000 audit: PATH item=0 name="/dev/fd/63" inode=19894 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:23:53.818000 audit[3481]: AVC avc: denied { write } for pid=3481 comm="tee" name="fd" dev="proc" ino=19903 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.818000 audit[3481]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd1ec57d5 a2=241 a3=1b6 items=1 ppid=3425 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.818000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:23:53.818000 audit: PATH item=0 name="/dev/fd/63" inode=18942 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:23:53.825000 audit[3469]: AVC avc: denied { write } for pid=3469 comm="tee" name="fd" dev="proc" ino=18959 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:23:53.825000 audit[3469]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffec6b37d6 a2=241 a3=1b6 items=1 ppid=3422 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.825000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:23:53.825000 
audit: PATH item=0 name="/dev/fd/63" inode=18939 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:23:53.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:23:53.877530 kubelet[2096]: I0913 00:23:53.877478 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkdw\" (UniqueName: \"kubernetes.io/projected/f6149830-eebf-4aeb-8855-d8591f472497-kube-api-access-ghkdw\") pod \"whisker-7d5896b879-p9bgp\" (UID: \"f6149830-eebf-4aeb-8855-d8591f472497\") " pod="calico-system/whisker-7d5896b879-p9bgp" Sep 13 00:23:53.877530 kubelet[2096]: I0913 00:23:53.877527 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6149830-eebf-4aeb-8855-d8591f472497-whisker-ca-bundle\") pod \"whisker-7d5896b879-p9bgp\" (UID: \"f6149830-eebf-4aeb-8855-d8591f472497\") " pod="calico-system/whisker-7d5896b879-p9bgp" Sep 13 00:23:53.878003 kubelet[2096]: I0913 00:23:53.877549 2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f6149830-eebf-4aeb-8855-d8591f472497-whisker-backend-key-pair\") pod \"whisker-7d5896b879-p9bgp\" (UID: \"f6149830-eebf-4aeb-8855-d8591f472497\") " pod="calico-system/whisker-7d5896b879-p9bgp" Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit: BPF prog-id=10 op=LOAD Sep 13 00:23:53.980000 audit[3530]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0df5948 a2=98 a3=fffff0df5938 items=0 ppid=3417 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.980000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:23:53.980000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit: BPF prog-id=11 op=LOAD Sep 13 00:23:53.980000 audit[3530]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0df57f8 a2=74 a3=95 items=0 ppid=3417 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.980000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:23:53.980000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { bpf } for pid=3530 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit: BPF prog-id=12 op=LOAD Sep 13 00:23:53.980000 audit[3530]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0df5828 a2=40 a3=fffff0df5858 items=0 ppid=3417 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.980000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:23:53.980000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:23:53.980000 audit[3530]: AVC avc: denied { perfmon } for pid=3530 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.980000 audit[3530]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=3 a0=0 a1=fffff0df5940 a2=50 a3=0 items=0 ppid=3417 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.980000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.984000 audit: BPF prog-id=13 op=LOAD Sep 13 00:23:53.984000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd870988 a2=98 a3=ffffcd870978 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.984000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:53.985000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } 
for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit: BPF prog-id=14 op=LOAD Sep 13 00:23:53.985000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd870618 a2=74 a3=95 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.985000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:53.985000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 
audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:53.985000 audit: BPF prog-id=15 op=LOAD Sep 13 00:23:53.985000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd870678 a2=94 a3=2 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:53.985000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:53.985000 audit: BPF 
prog-id=15 op=UNLOAD Sep 13 00:23:54.033240 env[1315]: time="2025-09-13T00:23:54.033194525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d5896b879-p9bgp,Uid:f6149830-eebf-4aeb-8855-d8591f472497,Namespace:calico-system,Attempt:0,}" Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 
audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit: BPF prog-id=16 op=LOAD Sep 13 00:23:54.074000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd870638 a2=40 a3=ffffcd870668 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.074000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:23:54.074000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.074000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcd870750 a2=50 a3=0 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd8706a8 a2=28 a3=ffffcd8707d8 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd8706d8 a2=28 a3=ffffcd870808 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd870588 a2=28 a3=ffffcd8706b8 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd8706f8 a2=28 a3=ffffcd870828 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd8706d8 a2=28 a3=ffffcd870808 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd8706c8 a2=28 a3=ffffcd8707f8 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd8706f8 a2=28 a3=ffffcd870828 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd8706d8 a2=28 a3=ffffcd870808 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd8706f8 a2=28 a3=ffffcd870828 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd8706c8 a2=28 a3=ffffcd8707f8 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd870748 a2=28 a3=ffffcd870888 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcd870480 a2=50 a3=0 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit: BPF prog-id=17 op=LOAD Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd870488 a2=94 a3=5 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcd870590 a2=50 a3=0 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcd8706d8 a2=4 a3=3 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: 
denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.083000 audit[3531]: AVC avc: denied { confidentiality } for pid=3531 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:23:54.083000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd8706b8 a2=94 a3=6 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for 
pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { confidentiality } for pid=3531 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:23:54.084000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd86fe88 a2=94 a3=83 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { perfmon } for pid=3531 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { bpf } for pid=3531 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.084000 audit[3531]: AVC avc: denied { confidentiality } for pid=3531 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:23:54.084000 audit[3531]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd86fe88 a2=94 a3=83 items=0 ppid=3417 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit: BPF prog-id=18 op=LOAD Sep 13 00:23:54.098000 audit[3546]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9b34858 a2=98 a3=ffffe9b34848 items=0 ppid=3417 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:23:54.098000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:23:54.098000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit: BPF prog-id=19 op=LOAD Sep 13 00:23:54.098000 audit[3546]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9b34708 a2=74 a3=95 items=0 ppid=3417 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.098000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:23:54.098000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for 
pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { perfmon } for pid=3546 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit[3546]: AVC avc: denied { bpf } for pid=3546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.098000 audit: BPF prog-id=20 op=LOAD Sep 13 00:23:54.098000 audit[3546]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9b34738 a2=40 a3=ffffe9b34768 items=0 ppid=3417 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.098000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:23:54.098000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:23:54.164701 systemd-networkd[1096]: vxlan.calico: Link UP Sep 13 00:23:54.164712 systemd-networkd[1096]: vxlan.calico: Gained carrier Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit: BPF prog-id=21 op=LOAD Sep 13 00:23:54.182000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 
a1=ffffc59b1388 a2=98 a3=ffffc59b1378 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.182000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit: BPF prog-id=22 op=LOAD Sep 13 00:23:54.182000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc59b1068 a2=74 a3=95 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.182000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.182000 audit: BPF prog-id=23 op=LOAD Sep 13 00:23:54.182000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc59b10c8 a2=94 a3=2 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc59b10f8 a2=28 a3=ffffc59b1228 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc59b1128 a2=28 a3=ffffc59b1258 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc59b0fd8 a2=28 a3=ffffc59b1108 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc59b1148 a2=28 a3=ffffc59b1278 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc59b1128 a2=28 a3=ffffc59b1258 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for 
pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc59b1118 a2=28 a3=ffffc59b1248 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc59b1148 a2=28 a3=ffffc59b1278 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc59b1128 a2=28 a3=ffffc59b1258 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc59b1148 a2=28 a3=ffffc59b1278 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc59b1118 a2=28 a3=ffffc59b1248 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc59b1198 a2=28 a3=ffffc59b12d8 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit: BPF prog-id=24 op=LOAD Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc59b0fb8 a2=40 a3=ffffc59b0fe8 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.183000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:23:54.183000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.183000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffc59b0fe0 
a2=50 a3=0 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffc59b0fe0 a2=50 a3=0 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.184000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit: BPF prog-id=25 op=LOAD Sep 13 00:23:54.184000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc59b0748 a2=94 a3=2 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.184000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.184000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.184000 audit: BPF prog-id=26 op=LOAD Sep 13 00:23:54.184000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc59b08d8 a2=94 a3=30 items=0 ppid=3417 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.184000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.186000 audit: BPF prog-id=27 op=LOAD Sep 13 00:23:54.186000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9619ca8 a2=98 a3=fffff9619c98 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.187000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit: BPF prog-id=28 op=LOAD Sep 13 00:23:54.187000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9619938 a2=74 a3=95 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 13 00:23:54.187000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.187000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
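The PROCTITLE records above encode the audited command line as hex, with argv elements separated by NUL bytes. Decoding the string that repeats throughout this stream (a standalone sketch; the hex is copied verbatim from the log):

```python
# PROCTITLE from the audit records: hex-encoded argv, NUL-separated.
PROCTITLE = (
    "627066746F6F6C002D2D6A736F6E002D2D70726574747900"
    "70726F670073686F770070696E6E6564002F7379732F6673"
    "2F6270662F63616C69636F2F7864702F70726566696C7465"
    "725F76315F63616C69636F5F746D705F41"
)

argv = bytes.fromhex(PROCTITLE).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
```

So every audit burst in this window comes from a Calico-managed `bpftool` invocation (ppid=3417) inspecting the pinned XDP prefilter program under /sys/fs/bpf/calico/xdp/.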
Sep 13 00:23:54.187000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.187000 audit: BPF prog-id=29 op=LOAD Sep 13 00:23:54.187000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9619998 a2=94 a3=2 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.187000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.187000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:23:54.219159 systemd-networkd[1096]: cali9c8ff7bff69: Link UP Sep 13 00:23:54.220209 systemd-networkd[1096]: cali9c8ff7bff69: Gained carrier Sep 13 00:23:54.220433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9c8ff7bff69: link becomes ready Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.124 [INFO][3534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7d5896b879--p9bgp-eth0 whisker-7d5896b879- calico-system f6149830-eebf-4aeb-8855-d8591f472497 934 0 2025-09-13 00:23:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d5896b879 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7d5896b879-p9bgp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9c8ff7bff69 [] [] }} ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-" 
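Every SYSCALL record in this stream carries arch=c00000b7 and syscall=280. Per linux/audit.h that arch value is AUDIT_ARCH_AARCH64, and on arm64's asm-generic syscall table number 280 is bpf(2), i.e. each record is one bpf() call from bpftool. A quick reconstruction of the arch constant (values from linux/audit.h, not from the log itself):

```python
# Reconstruct AUDIT_ARCH_AARCH64 from its components (linux/audit.h):
# the ELF machine number for AArch64, OR'd with the 64-bit and
# little-endian audit flags.
EM_AARCH64 = 183            # 0xb7
AUDIT_ARCH_64BIT = 0x80000000
AUDIT_ARCH_LE = 0x40000000

AUDIT_ARCH_AARCH64 = AUDIT_ARCH_64BIT | AUDIT_ARCH_LE | EM_AARCH64
print(hex(AUDIT_ARCH_AARCH64))  # 0xc00000b7 -- matches arch= in the records
```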
Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.124 [INFO][3534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.165 [INFO][3559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" HandleID="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Workload="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.166 [INFO][3559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" HandleID="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Workload="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005996d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7d5896b879-p9bgp", "timestamp":"2025-09-13 00:23:54.165902213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.166 [INFO][3559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.166 [INFO][3559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
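The ipam_plugin/ipam records around this point trace Calico's standard assignment path: request one IPv4 address, take the host-wide IPAM lock, confirm the host's affinity to block 192.168.88.128/26, then claim the first free address from it. The block arithmetic can be sanity-checked with the standard library (a sketch; the CIDR comes from the log records):

```python
import ipaddress

# The affine IPAM block this host loads, per the log records.
block = ipaddress.ip_network("192.168.88.128/26")

first = block.network_address + 1
print(block.num_addresses)  # 64 addresses per /26 block
print(first)                # 192.168.88.129, the IP claimed in this log
```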
Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.166 [INFO][3559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.186 [INFO][3559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.194 [INFO][3559] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.198 [INFO][3559] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.200 [INFO][3559] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.202 [INFO][3559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.202 [INFO][3559] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.203 [INFO][3559] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957 Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.207 [INFO][3559] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.211 [INFO][3559] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" host="localhost" Sep 13 
00:23:54.232886 env[1315]: 2025-09-13 00:23:54.211 [INFO][3559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" host="localhost" Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.211 [INFO][3559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:23:54.232886 env[1315]: 2025-09-13 00:23:54.212 [INFO][3559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" HandleID="k8s-pod-network.b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Workload="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" Sep 13 00:23:54.233484 env[1315]: 2025-09-13 00:23:54.216 [INFO][3534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d5896b879--p9bgp-eth0", GenerateName:"whisker-7d5896b879-", Namespace:"calico-system", SelfLink:"", UID:"f6149830-eebf-4aeb-8855-d8591f472497", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d5896b879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7d5896b879-p9bgp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9c8ff7bff69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:23:54.233484 env[1315]: 2025-09-13 00:23:54.216 [INFO][3534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" Sep 13 00:23:54.233484 env[1315]: 2025-09-13 00:23:54.216 [INFO][3534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c8ff7bff69 ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" Sep 13 00:23:54.233484 env[1315]: 2025-09-13 00:23:54.220 [INFO][3534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" Sep 13 00:23:54.233484 env[1315]: 2025-09-13 00:23:54.221 [INFO][3534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d5896b879--p9bgp-eth0", GenerateName:"whisker-7d5896b879-", Namespace:"calico-system", SelfLink:"", UID:"f6149830-eebf-4aeb-8855-d8591f472497", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d5896b879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957", Pod:"whisker-7d5896b879-p9bgp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9c8ff7bff69", MAC:"62:80:78:c0:b7:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:23:54.233484 env[1315]: 2025-09-13 00:23:54.230 [INFO][3534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957" Namespace="calico-system" Pod="whisker-7d5896b879-p9bgp" WorkloadEndpoint="localhost-k8s-whisker--7d5896b879--p9bgp-eth0" Sep 13 00:23:54.248317 env[1315]: time="2025-09-13T00:23:54.248175049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:54.248317 env[1315]: time="2025-09-13T00:23:54.248222609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:54.248317 env[1315]: time="2025-09-13T00:23:54.248233209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:54.262302 env[1315]: time="2025-09-13T00:23:54.253604279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957 pid=3607 runtime=io.containerd.runc.v2 Sep 13 00:23:54.289758 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 
audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit: BPF prog-id=30 op=LOAD Sep 13 00:23:54.292000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9619958 a2=40 a3=fffff9619988 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.292000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:23:54.292000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.292000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff9619a70 a2=50 a3=0 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff96199c8 a2=28 a3=fffff9619af8 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff96199f8 a2=28 a3=fffff9619b28 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff96198a8 a2=28 a3=fffff96199d8 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9619a18 a2=28 a3=fffff9619b48 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff96199f8 a2=28 a3=fffff9619b28 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff96199e8 a2=28 a3=fffff9619b18 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9619a18 a2=28 a3=fffff9619b48 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff96199f8 a2=28 a3=fffff9619b28 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff9619a18 a2=28 a3=fffff9619b48 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff96199e8 a2=28 a3=fffff9619b18 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9619a68 a2=28 a3=fffff9619ba8 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff96197a0 a2=50 a3=0 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.301000 audit: BPF prog-id=31 op=LOAD Sep 13 00:23:54.301000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff96197a8 a2=94 a3=5 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.302000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff96198b0 a2=50 a3=0 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.302000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff96199f8 a2=4 a3=3 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.302000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for 
pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { confidentiality } for pid=3588 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:23:54.302000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff96199d8 a2=94 a3=6 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.302000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { confidentiality } for pid=3588 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:23:54.302000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff96191a8 a2=94 a3=83 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.302000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { perfmon } for pid=3588 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.302000 audit[3588]: AVC avc: denied { confidentiality } for pid=3588 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:23:54.302000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff96191a8 a2=94 a3=83 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.302000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.303000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.303000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff961abe8 a2=10 a3=fffff961acd8 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.303000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.303000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff961aaa8 a2=10 
a3=fffff961ab98 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.303000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.303000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff961aa18 a2=10 a3=fffff961ab98 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.303000 audit[3588]: AVC avc: denied { bpf } for pid=3588 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:23:54.303000 audit[3588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff961aa18 a2=10 a3=fffff961ab98 items=0 ppid=3417 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.303000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:23:54.310000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:23:54.313754 env[1315]: time="2025-09-13T00:23:54.313712201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d5896b879-p9bgp,Uid:f6149830-eebf-4aeb-8855-d8591f472497,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957\"" Sep 13 00:23:54.316695 env[1315]: time="2025-09-13T00:23:54.316662894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:23:54.354000 audit[3667]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3667 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:23:54.354000 audit[3667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffffeae9ca0 a2=0 a3=ffffa8918fa8 items=0 ppid=3417 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.354000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:23:54.358000 audit[3664]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3664 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:23:54.358000 audit[3664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc2074b80 a2=0 a3=ffff871b2fa8 items=0 ppid=3417 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.358000 
audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:23:54.361000 audit[3663]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3663 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:23:54.361000 audit[3663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff96ad450 a2=0 a3=ffff9418cfa8 items=0 ppid=3417 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.361000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:23:54.362000 audit[3668]: NETFILTER_CFG table=filter:104 family=2 entries=39 op=nft_register_chain pid=3668 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:23:54.362000 audit[3668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=fffff832dd60 a2=0 a3=ffffab596fa8 items=0 ppid=3417 pid=3668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.362000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:23:54.387000 audit[3675]: NETFILTER_CFG table=filter:105 family=2 entries=59 op=nft_register_chain pid=3675 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:23:54.387000 audit[3675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35860 a0=3 a1=ffffe5aa2e20 a2=0 a3=ffff9bf69fa8 items=0 ppid=3417 pid=3675 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:54.387000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:23:54.570260 kubelet[2096]: I0913 00:23:54.570200 2096 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e4e83a-e9a0-426d-ae5e-a20862c0016b" path="/var/lib/kubelet/pods/87e4e83a-e9a0-426d-ae5e-a20862c0016b/volumes" Sep 13 00:23:55.200052 env[1315]: time="2025-09-13T00:23:55.199993004Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:55.201916 env[1315]: time="2025-09-13T00:23:55.201889987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:55.203635 env[1315]: time="2025-09-13T00:23:55.203597292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:55.206621 env[1315]: time="2025-09-13T00:23:55.206583345Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:55.207241 env[1315]: time="2025-09-13T00:23:55.207214060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:23:55.211006 env[1315]: 
time="2025-09-13T00:23:55.210712669Z" level=info msg="CreateContainer within sandbox \"b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:23:55.221855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80325510.mount: Deactivated successfully. Sep 13 00:23:55.224970 env[1315]: time="2025-09-13T00:23:55.224895702Z" level=info msg="CreateContainer within sandbox \"b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c79e5225007f8a44393122edb848678f9110dbbed82b3fb0db7c240f3d791aaf\"" Sep 13 00:23:55.225475 env[1315]: time="2025-09-13T00:23:55.225450217Z" level=info msg="StartContainer for \"c79e5225007f8a44393122edb848678f9110dbbed82b3fb0db7c240f3d791aaf\"" Sep 13 00:23:55.285096 env[1315]: time="2025-09-13T00:23:55.285051484Z" level=info msg="StartContainer for \"c79e5225007f8a44393122edb848678f9110dbbed82b3fb0db7c240f3d791aaf\" returns successfully" Sep 13 00:23:55.286431 env[1315]: time="2025-09-13T00:23:55.286358633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:23:55.432606 systemd-networkd[1096]: vxlan.calico: Gained IPv6LL Sep 13 00:23:55.496546 systemd-networkd[1096]: cali9c8ff7bff69: Gained IPv6LL Sep 13 00:23:56.502182 kubelet[2096]: I0913 00:23:56.501649 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:23:56.541711 systemd[1]: run-containerd-runc-k8s.io-c15bfcaebaec07fca55fa274eb1be991b378b82fc55344d4383e1a3bc3617055-runc.VII9MT.mount: Deactivated successfully. 
Sep 13 00:23:56.838129 env[1315]: time="2025-09-13T00:23:56.838070843Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:56.840480 env[1315]: time="2025-09-13T00:23:56.840436222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:56.842283 env[1315]: time="2025-09-13T00:23:56.842246007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:56.844129 env[1315]: time="2025-09-13T00:23:56.844094951Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:23:56.844750 env[1315]: time="2025-09-13T00:23:56.844711985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:23:56.847635 env[1315]: time="2025-09-13T00:23:56.847600920Z" level=info msg="CreateContainer within sandbox \"b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:23:56.860838 env[1315]: time="2025-09-13T00:23:56.860771367Z" level=info msg="CreateContainer within sandbox \"b4e13352b990ff8f9dbef169ec04d2743ecad75ccdad35d163a05170834f3957\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"711306d163a9a687e7a0c960f51693e3e6a93112b006db97b9c0e94ccbe9535f\"" Sep 13 00:23:56.861654 env[1315]: 
time="2025-09-13T00:23:56.861623240Z" level=info msg="StartContainer for \"711306d163a9a687e7a0c960f51693e3e6a93112b006db97b9c0e94ccbe9535f\"" Sep 13 00:23:56.923496 env[1315]: time="2025-09-13T00:23:56.923439268Z" level=info msg="StartContainer for \"711306d163a9a687e7a0c960f51693e3e6a93112b006db97b9c0e94ccbe9535f\" returns successfully" Sep 13 00:23:57.208301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107940096.mount: Deactivated successfully. Sep 13 00:23:57.722491 kubelet[2096]: I0913 00:23:57.722422 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d5896b879-p9bgp" podStartSLOduration=2.192761812 podStartE2EDuration="4.72240589s" podCreationTimestamp="2025-09-13 00:23:53 +0000 UTC" firstStartedPulling="2025-09-13 00:23:54.316300217 +0000 UTC m=+31.883700516" lastFinishedPulling="2025-09-13 00:23:56.845944255 +0000 UTC m=+34.413344594" observedRunningTime="2025-09-13 00:23:57.719715233 +0000 UTC m=+35.287115572" watchObservedRunningTime="2025-09-13 00:23:57.72240589 +0000 UTC m=+35.289806229" Sep 13 00:23:57.739000 audit[3800]: NETFILTER_CFG table=filter:106 family=2 entries=19 op=nft_register_rule pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:57.740543 kernel: kauditd_printk_skb: 556 callbacks suppressed Sep 13 00:23:57.740620 kernel: audit: type=1325 audit(1757723037.739:396): table=filter:106 family=2 entries=19 op=nft_register_rule pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:57.739000 audit[3800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe9dc64e0 a2=0 a3=1 items=0 ppid=2207 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:57.745123 kernel: audit: type=1300 audit(1757723037.739:396): arch=c00000b7 syscall=211 success=yes 
exit=6736 a0=3 a1=ffffe9dc64e0 a2=0 a3=1 items=0 ppid=2207 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:57.745205 kernel: audit: type=1327 audit(1757723037.739:396): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:57.739000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:57.749000 audit[3800]: NETFILTER_CFG table=nat:107 family=2 entries=21 op=nft_register_chain pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:57.749000 audit[3800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=ffffe9dc64e0 a2=0 a3=1 items=0 ppid=2207 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:57.755339 kernel: audit: type=1325 audit(1757723037.749:397): table=nat:107 family=2 entries=21 op=nft_register_chain pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:23:57.755448 kernel: audit: type=1300 audit(1757723037.749:397): arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=ffffe9dc64e0 a2=0 a3=1 items=0 ppid=2207 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:57.755472 kernel: audit: type=1327 audit(1757723037.749:397): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:57.749000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:23:59.568260 env[1315]: time="2025-09-13T00:23:59.568218646Z" level=info msg="StopPodSandbox for \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\"" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.620 [INFO][3823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.620 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" iface="eth0" netns="/var/run/netns/cni-c73ca0be-b9d0-0915-f1bc-0bb7f1dbb6b9" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.620 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" iface="eth0" netns="/var/run/netns/cni-c73ca0be-b9d0-0915-f1bc-0bb7f1dbb6b9" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.620 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" iface="eth0" netns="/var/run/netns/cni-c73ca0be-b9d0-0915-f1bc-0bb7f1dbb6b9" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.620 [INFO][3823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.620 [INFO][3823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.641 [INFO][3831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.641 [INFO][3831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.641 [INFO][3831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.651 [WARNING][3831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.651 [INFO][3831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.653 [INFO][3831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:23:59.660181 env[1315]: 2025-09-13 00:23:59.655 [INFO][3823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:23:59.662930 systemd[1]: run-netns-cni\x2dc73ca0be\x2db9d0\x2d0915\x2df1bc\x2d0bb7f1dbb6b9.mount: Deactivated successfully. 
Sep 13 00:23:59.664515 env[1315]: time="2025-09-13T00:23:59.664376621Z" level=info msg="TearDown network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\" successfully" Sep 13 00:23:59.664609 env[1315]: time="2025-09-13T00:23:59.664513940Z" level=info msg="StopPodSandbox for \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\" returns successfully" Sep 13 00:23:59.665193 kubelet[2096]: E0913 00:23:59.665165 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:59.665978 env[1315]: time="2025-09-13T00:23:59.665934369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c5t9w,Uid:c0c72a36-f574-485d-b83f-4271860bd697,Namespace:kube-system,Attempt:1,}" Sep 13 00:23:59.818010 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:23:59.818125 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid587c6b0ea0: link becomes ready Sep 13 00:23:59.818332 systemd-networkd[1096]: calid587c6b0ea0: Link UP Sep 13 00:23:59.818544 systemd-networkd[1096]: calid587c6b0ea0: Gained carrier Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.733 [INFO][3839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0 coredns-7c65d6cfc9- kube-system c0c72a36-f574-485d-b83f-4271860bd697 967 0 2025-09-13 00:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-c5t9w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid587c6b0ea0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.733 [INFO][3839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.763 [INFO][3855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" HandleID="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.764 [INFO][3855] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" HandleID="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-c5t9w", "timestamp":"2025-09-13 00:23:59.76391493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.764 [INFO][3855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.764 [INFO][3855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.764 [INFO][3855] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.773 [INFO][3855] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.780 [INFO][3855] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.786 [INFO][3855] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.789 [INFO][3855] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.792 [INFO][3855] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.792 [INFO][3855] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.796 [INFO][3855] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5 Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.805 [INFO][3855] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.812 [INFO][3855] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" host="localhost" Sep 13 
00:23:59.839084 env[1315]: 2025-09-13 00:23:59.812 [INFO][3855] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" host="localhost" Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.812 [INFO][3855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:23:59.839084 env[1315]: 2025-09-13 00:23:59.812 [INFO][3855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" HandleID="k8s-pod-network.d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.839701 env[1315]: 2025-09-13 00:23:59.814 [INFO][3839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c0c72a36-f574-485d-b83f-4271860bd697", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-c5t9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid587c6b0ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:23:59.839701 env[1315]: 2025-09-13 00:23:59.814 [INFO][3839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.839701 env[1315]: 2025-09-13 00:23:59.814 [INFO][3839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid587c6b0ea0 ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.839701 env[1315]: 2025-09-13 00:23:59.822 [INFO][3839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.839701 env[1315]: 2025-09-13 00:23:59.822 [INFO][3839] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c0c72a36-f574-485d-b83f-4271860bd697", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5", Pod:"coredns-7c65d6cfc9-c5t9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid587c6b0ea0", MAC:"6e:eb:26:90:87:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:23:59.839701 env[1315]: 2025-09-13 00:23:59.836 [INFO][3839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c5t9w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:23:59.850632 env[1315]: time="2025-09-13T00:23:59.850511219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:59.850632 env[1315]: time="2025-09-13T00:23:59.850578338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:59.850632 env[1315]: time="2025-09-13T00:23:59.850588818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:59.850837 env[1315]: time="2025-09-13T00:23:59.850770297Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5 pid=3883 runtime=io.containerd.runc.v2 Sep 13 00:23:59.850000 audit[3880]: NETFILTER_CFG table=filter:108 family=2 entries=42 op=nft_register_chain pid=3880 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:23:59.850000 audit[3880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffcc4b3890 a2=0 a3=ffff8888cfa8 items=0 ppid=3417 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:59.857363 kernel: audit: type=1325 audit(1757723039.850:398): table=filter:108 family=2 entries=42 op=nft_register_chain pid=3880 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:23:59.857473 kernel: audit: type=1300 audit(1757723039.850:398): arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffcc4b3890 a2=0 a3=ffff8888cfa8 items=0 ppid=3417 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:23:59.850000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:23:59.861636 kernel: audit: type=1327 audit(1757723039.850:398): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:23:59.895146 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:23:59.917073 env[1315]: time="2025-09-13T00:23:59.917035863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c5t9w,Uid:c0c72a36-f574-485d-b83f-4271860bd697,Namespace:kube-system,Attempt:1,} returns sandbox id \"d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5\"" Sep 13 00:23:59.918047 kubelet[2096]: E0913 00:23:59.918019 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:23:59.922130 env[1315]: time="2025-09-13T00:23:59.922090424Z" level=info msg="CreateContainer within sandbox \"d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:23:59.969226 env[1315]: time="2025-09-13T00:23:59.969164259Z" level=info msg="CreateContainer within sandbox 
\"d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5869309fe1b6c68fe89a583b700fa273c4d9dcf868f4e4dd5616590e2dd9c403\"" Sep 13 00:23:59.971059 env[1315]: time="2025-09-13T00:23:59.970123532Z" level=info msg="StartContainer for \"5869309fe1b6c68fe89a583b700fa273c4d9dcf868f4e4dd5616590e2dd9c403\"" Sep 13 00:24:00.018232 env[1315]: time="2025-09-13T00:24:00.018174044Z" level=info msg="StartContainer for \"5869309fe1b6c68fe89a583b700fa273c4d9dcf868f4e4dd5616590e2dd9c403\" returns successfully" Sep 13 00:24:00.568756 env[1315]: time="2025-09-13T00:24:00.568717157Z" level=info msg="StopPodSandbox for \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\"" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.622 [INFO][3967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.623 [INFO][3967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" iface="eth0" netns="/var/run/netns/cni-30048509-74f4-efb0-8b34-d1b04907afaf" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.623 [INFO][3967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" iface="eth0" netns="/var/run/netns/cni-30048509-74f4-efb0-8b34-d1b04907afaf" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.624 [INFO][3967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" iface="eth0" netns="/var/run/netns/cni-30048509-74f4-efb0-8b34-d1b04907afaf" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.624 [INFO][3967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.624 [INFO][3967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.649 [INFO][3977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.649 [INFO][3977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.649 [INFO][3977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.663 [WARNING][3977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.663 [INFO][3977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.665 [INFO][3977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:00.670707 env[1315]: 2025-09-13 00:24:00.668 [INFO][3967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:00.672543 env[1315]: time="2025-09-13T00:24:00.672487579Z" level=info msg="TearDown network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\" successfully" Sep 13 00:24:00.672543 env[1315]: time="2025-09-13T00:24:00.672530539Z" level=info msg="StopPodSandbox for \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\" returns successfully" Sep 13 00:24:00.673422 env[1315]: time="2025-09-13T00:24:00.673309373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-s27wv,Uid:e90ef52a-67ed-4ab0-b978-c57c4259dadf,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:24:00.674878 systemd[1]: run-netns-cni\x2d30048509\x2d74f4\x2defb0\x2d8b34\x2dd1b04907afaf.mount: Deactivated successfully. 
Sep 13 00:24:00.721603 kubelet[2096]: E0913 00:24:00.721433 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:00.761031 kubelet[2096]: I0913 00:24:00.760846 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c5t9w" podStartSLOduration=32.760827477 podStartE2EDuration="32.760827477s" podCreationTimestamp="2025-09-13 00:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:00.743337648 +0000 UTC m=+38.310738027" watchObservedRunningTime="2025-09-13 00:24:00.760827477 +0000 UTC m=+38.328227816" Sep 13 00:24:00.764000 audit[4003]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=4003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:00.764000 audit[4003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffb162490 a2=0 a3=1 items=0 ppid=2207 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:00.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:00.767410 kernel: audit: type=1325 audit(1757723040.764:399): table=filter:109 family=2 entries=18 op=nft_register_rule pid=4003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:00.771000 audit[4003]: NETFILTER_CFG table=nat:110 family=2 entries=16 op=nft_register_rule pid=4003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:00.771000 audit[4003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=fffffb162490 a2=0 a3=1 items=0 ppid=2207 
pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:00.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:00.782000 audit[4008]: NETFILTER_CFG table=filter:111 family=2 entries=15 op=nft_register_rule pid=4008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:00.782000 audit[4008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffeb824fd0 a2=0 a3=1 items=0 ppid=2207 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:00.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:00.789000 audit[4008]: NETFILTER_CFG table=nat:112 family=2 entries=37 op=nft_register_chain pid=4008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:00.789000 audit[4008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=ffffeb824fd0 a2=0 a3=1 items=0 ppid=2207 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:00.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:00.833529 systemd-networkd[1096]: cali31cd1eb8a51: Link UP Sep 13 00:24:00.836301 systemd-networkd[1096]: cali31cd1eb8a51: Gained carrier Sep 13 00:24:00.836449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:24:00.836490 kernel: 
IPv6: ADDRCONF(NETDEV_CHANGE): cali31cd1eb8a51: link becomes ready Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.742 [INFO][3984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0 calico-apiserver-59db8f9d95- calico-apiserver e90ef52a-67ed-4ab0-b978-c57c4259dadf 980 0 2025-09-13 00:23:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59db8f9d95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59db8f9d95-s27wv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali31cd1eb8a51 [] [] }} ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.742 [INFO][3984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.786 [INFO][4000] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" HandleID="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.786 [INFO][4000] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" 
HandleID="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59db8f9d95-s27wv", "timestamp":"2025-09-13 00:24:00.786186047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.786 [INFO][4000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.786 [INFO][4000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.786 [INFO][4000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.796 [INFO][4000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.805 [INFO][4000] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.811 [INFO][4000] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.814 [INFO][4000] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.816 [INFO][4000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.816 [INFO][4000] ipam/ipam.go 1220: Attempting 
to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.818 [INFO][4000] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370 Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.822 [INFO][4000] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.829 [INFO][4000] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.829 [INFO][4000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" host="localhost" Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.829 [INFO][4000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:00.851786 env[1315]: 2025-09-13 00:24:00.829 [INFO][4000] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" HandleID="k8s-pod-network.8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.852407 env[1315]: 2025-09-13 00:24:00.832 [INFO][3984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", SelfLink:"", UID:"e90ef52a-67ed-4ab0-b978-c57c4259dadf", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59db8f9d95-s27wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31cd1eb8a51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:00.852407 env[1315]: 2025-09-13 00:24:00.832 [INFO][3984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.852407 env[1315]: 2025-09-13 00:24:00.832 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31cd1eb8a51 ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.852407 env[1315]: 2025-09-13 00:24:00.836 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.852407 env[1315]: 2025-09-13 00:24:00.837 [INFO][3984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"e90ef52a-67ed-4ab0-b978-c57c4259dadf", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370", Pod:"calico-apiserver-59db8f9d95-s27wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31cd1eb8a51", MAC:"5a:8e:0e:2c:ec:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:00.852407 env[1315]: 2025-09-13 00:24:00.849 [INFO][3984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-s27wv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:00.862747 env[1315]: time="2025-09-13T00:24:00.862681353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:00.862747 env[1315]: time="2025-09-13T00:24:00.862724473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:00.862747 env[1315]: time="2025-09-13T00:24:00.862734473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:00.863008 env[1315]: time="2025-09-13T00:24:00.862959031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370 pid=4027 runtime=io.containerd.runc.v2 Sep 13 00:24:00.869000 audit[4038]: NETFILTER_CFG table=filter:113 family=2 entries=54 op=nft_register_chain pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:00.869000 audit[4038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=fffffad119b0 a2=0 a3=ffffb3ca1fa8 items=0 ppid=3417 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:00.869000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:24:00.872797 systemd-networkd[1096]: calid587c6b0ea0: Gained IPv6LL Sep 13 00:24:00.899985 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:24:00.918675 env[1315]: time="2025-09-13T00:24:00.918607334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-s27wv,Uid:e90ef52a-67ed-4ab0-b978-c57c4259dadf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id 
\"8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370\"" Sep 13 00:24:00.922217 env[1315]: time="2025-09-13T00:24:00.922181467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:24:01.568153 env[1315]: time="2025-09-13T00:24:01.568110639Z" level=info msg="StopPodSandbox for \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\"" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.619 [INFO][4072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.619 [INFO][4072] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" iface="eth0" netns="/var/run/netns/cni-4b70ceed-5a7b-5e4b-9cf5-8750e0ec3cbf" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.619 [INFO][4072] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" iface="eth0" netns="/var/run/netns/cni-4b70ceed-5a7b-5e4b-9cf5-8750e0ec3cbf" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.620 [INFO][4072] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" iface="eth0" netns="/var/run/netns/cni-4b70ceed-5a7b-5e4b-9cf5-8750e0ec3cbf" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.620 [INFO][4072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.620 [INFO][4072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.664 [INFO][4080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.664 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.664 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.673 [WARNING][4080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.673 [INFO][4080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.675 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:01.680912 env[1315]: 2025-09-13 00:24:01.677 [INFO][4072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:01.681800 env[1315]: time="2025-09-13T00:24:01.681759014Z" level=info msg="TearDown network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\" successfully" Sep 13 00:24:01.681882 env[1315]: time="2025-09-13T00:24:01.681865653Z" level=info msg="StopPodSandbox for \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\" returns successfully" Sep 13 00:24:01.682634 env[1315]: time="2025-09-13T00:24:01.682602928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-r22kc,Uid:893d60eb-0d9b-45af-8fda-3d0f54249b41,Namespace:calico-system,Attempt:1,}" Sep 13 00:24:01.684262 systemd[1]: run-netns-cni\x2d4b70ceed\x2d5a7b\x2d5e4b\x2d9cf5\x2d8750e0ec3cbf.mount: Deactivated successfully. 
Sep 13 00:24:01.778654 kubelet[2096]: E0913 00:24:01.778539 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:01.912263 systemd-networkd[1096]: cali4447082197a: Link UP Sep 13 00:24:01.914835 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:24:01.914929 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4447082197a: link becomes ready Sep 13 00:24:01.915061 systemd-networkd[1096]: cali4447082197a: Gained carrier Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.795 [INFO][4088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--r22kc-eth0 goldmane-7988f88666- calico-system 893d60eb-0d9b-45af-8fda-3d0f54249b41 998 0 2025-09-13 00:23:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-r22kc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4447082197a [] [] }} ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.795 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.858 [INFO][4103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" HandleID="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.858 [INFO][4103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" HandleID="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000506fb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-r22kc", "timestamp":"2025-09-13 00:24:01.85865041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.858 [INFO][4103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.858 [INFO][4103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.858 [INFO][4103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.869 [INFO][4103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.881 [INFO][4103] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.886 [INFO][4103] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.888 [INFO][4103] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.890 [INFO][4103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.890 [INFO][4103] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.892 [INFO][4103] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.899 [INFO][4103] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.907 [INFO][4103] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" host="localhost" Sep 13 
00:24:01.938134 env[1315]: 2025-09-13 00:24:01.907 [INFO][4103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" host="localhost" Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.907 [INFO][4103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:01.938134 env[1315]: 2025-09-13 00:24:01.907 [INFO][4103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" HandleID="k8s-pod-network.09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.938822 env[1315]: 2025-09-13 00:24:01.910 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--r22kc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"893d60eb-0d9b-45af-8fda-3d0f54249b41", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-r22kc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4447082197a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:01.938822 env[1315]: 2025-09-13 00:24:01.910 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.938822 env[1315]: 2025-09-13 00:24:01.910 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4447082197a ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.938822 env[1315]: 2025-09-13 00:24:01.912 [INFO][4088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.938822 env[1315]: 2025-09-13 00:24:01.913 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--r22kc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"893d60eb-0d9b-45af-8fda-3d0f54249b41", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b", Pod:"goldmane-7988f88666-r22kc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4447082197a", MAC:"3e:5f:27:e5:6e:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:01.938822 env[1315]: 2025-09-13 00:24:01.932 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b" Namespace="calico-system" Pod="goldmane-7988f88666-r22kc" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:01.946000 audit[4118]: NETFILTER_CFG table=filter:114 family=2 entries=52 op=nft_register_chain pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:01.946000 audit[4118]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=27556 a0=3 a1=ffffe6caaa20 a2=0 a3=ffffaba4dfa8 items=0 ppid=3417 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:01.946000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:24:01.953499 env[1315]: time="2025-09-13T00:24:01.953416722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:01.953499 env[1315]: time="2025-09-13T00:24:01.953464801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:01.953499 env[1315]: time="2025-09-13T00:24:01.953475001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:01.953944 env[1315]: time="2025-09-13T00:24:01.953901998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b pid=4127 runtime=io.containerd.runc.v2 Sep 13 00:24:02.025922 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:24:02.073846 env[1315]: time="2025-09-13T00:24:02.073798024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-r22kc,Uid:893d60eb-0d9b-45af-8fda-3d0f54249b41,Namespace:calico-system,Attempt:1,} returns sandbox id \"09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b\"" Sep 13 00:24:02.344802 systemd-networkd[1096]: cali31cd1eb8a51: Gained IPv6LL Sep 13 00:24:02.568742 env[1315]: time="2025-09-13T00:24:02.568688180Z" level=info msg="StopPodSandbox for \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\"" Sep 13 00:24:02.569130 env[1315]: time="2025-09-13T00:24:02.569099937Z" level=info msg="StopPodSandbox for \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\"" Sep 13 00:24:02.786577 kubelet[2096]: E0913 00:24:02.786423 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.658 [INFO][4182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.658 [INFO][4182] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" iface="eth0" netns="/var/run/netns/cni-55c01a99-3036-6e96-1660-c556d31f42ec" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.659 [INFO][4182] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" iface="eth0" netns="/var/run/netns/cni-55c01a99-3036-6e96-1660-c556d31f42ec" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.659 [INFO][4182] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" iface="eth0" netns="/var/run/netns/cni-55c01a99-3036-6e96-1660-c556d31f42ec" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.659 [INFO][4182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.659 [INFO][4182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.716 [INFO][4198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.717 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.717 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.731 [WARNING][4198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.732 [INFO][4198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.734 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:02.795483 env[1315]: 2025-09-13 00:24:02.790 [INFO][4182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:02.798784 env[1315]: time="2025-09-13T00:24:02.798739281Z" level=info msg="TearDown network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\" successfully" Sep 13 00:24:02.798904 env[1315]: time="2025-09-13T00:24:02.798887600Z" level=info msg="StopPodSandbox for \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\" returns successfully" Sep 13 00:24:02.799694 systemd[1]: run-netns-cni\x2d55c01a99\x2d3036\x2d6e96\x2d1660\x2dc556d31f42ec.mount: Deactivated successfully. 
Sep 13 00:24:02.801508 env[1315]: time="2025-09-13T00:24:02.799686114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-nhnkq,Uid:a5f5c1be-6355-4194-9496-65fe9b497b32,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.680 [INFO][4181] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.680 [INFO][4181] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" iface="eth0" netns="/var/run/netns/cni-4fa455cd-fb3f-e5ee-f9f2-1d850f8ce144" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.681 [INFO][4181] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" iface="eth0" netns="/var/run/netns/cni-4fa455cd-fb3f-e5ee-f9f2-1d850f8ce144" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.682 [INFO][4181] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" iface="eth0" netns="/var/run/netns/cni-4fa455cd-fb3f-e5ee-f9f2-1d850f8ce144" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.682 [INFO][4181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.682 [INFO][4181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.792 [INFO][4205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.792 [INFO][4205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.792 [INFO][4205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.810 [WARNING][4205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.810 [INFO][4205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.811 [INFO][4205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:02.820414 env[1315]: 2025-09-13 00:24:02.815 [INFO][4181] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:02.821910 systemd[1]: run-netns-cni\x2d4fa455cd\x2dfb3f\x2de5ee\x2df9f2\x2d1d850f8ce144.mount: Deactivated successfully. 
Sep 13 00:24:02.822715 env[1315]: time="2025-09-13T00:24:02.822657512Z" level=info msg="TearDown network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\" successfully" Sep 13 00:24:02.822715 env[1315]: time="2025-09-13T00:24:02.822697272Z" level=info msg="StopPodSandbox for \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\" returns successfully" Sep 13 00:24:02.823244 kubelet[2096]: E0913 00:24:02.823206 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:02.824687 env[1315]: time="2025-09-13T00:24:02.824637498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfvmh,Uid:c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db,Namespace:kube-system,Attempt:1,}" Sep 13 00:24:02.936812 env[1315]: time="2025-09-13T00:24:02.936768229Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:24:02.938230 env[1315]: time="2025-09-13T00:24:02.938191459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:24:02.940555 env[1315]: time="2025-09-13T00:24:02.940525243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:24:02.942805 env[1315]: time="2025-09-13T00:24:02.942767027Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:24:02.943452 env[1315]: 
time="2025-09-13T00:24:02.943422142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:24:02.945727 env[1315]: time="2025-09-13T00:24:02.945689046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:24:02.946361 env[1315]: time="2025-09-13T00:24:02.946315162Z" level=info msg="CreateContainer within sandbox \"8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:24:02.962261 env[1315]: time="2025-09-13T00:24:02.962211290Z" level=info msg="CreateContainer within sandbox \"8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cf40db10a44a38c7144dac004d4f67bd03373dccec8d6db913243b1af891baf8\"" Sep 13 00:24:02.964427 env[1315]: time="2025-09-13T00:24:02.964396235Z" level=info msg="StartContainer for \"cf40db10a44a38c7144dac004d4f67bd03373dccec8d6db913243b1af891baf8\"" Sep 13 00:24:03.002579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:24:03.002686 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali12175103d0b: link becomes ready Sep 13 00:24:03.003167 systemd-networkd[1096]: cali12175103d0b: Link UP Sep 13 00:24:03.003303 systemd-networkd[1096]: cali12175103d0b: Gained carrier Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.903 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0 coredns-7c65d6cfc9- kube-system c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db 1010 0 2025-09-13 00:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-7c65d6cfc9-bfvmh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12175103d0b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.903 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.947 [INFO][4247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" HandleID="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.947 [INFO][4247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" HandleID="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-bfvmh", "timestamp":"2025-09-13 00:24:02.947199436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.947 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.947 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.947 [INFO][4247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.962 [INFO][4247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.968 [INFO][4247] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.973 [INFO][4247] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.974 [INFO][4247] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.976 [INFO][4247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.977 [INFO][4247] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.978 [INFO][4247] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748 Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.983 [INFO][4247] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.991 [INFO][4247] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.991 [INFO][4247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" host="localhost" Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.991 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:03.024544 env[1315]: 2025-09-13 00:24:02.991 [INFO][4247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" HandleID="k8s-pod-network.24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:03.025161 env[1315]: 2025-09-13 00:24:02.999 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-bfvmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12175103d0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:03.025161 env[1315]: 2025-09-13 00:24:02.999 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:03.025161 env[1315]: 2025-09-13 00:24:03.000 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12175103d0b ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:03.025161 env[1315]: 2025-09-13 00:24:03.009 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 
00:24:03.025161 env[1315]: 2025-09-13 00:24:03.010 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748", Pod:"coredns-7c65d6cfc9-bfvmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12175103d0b", MAC:"fa:b5:5a:b9:fd:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:03.025161 env[1315]: 2025-09-13 00:24:03.021 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfvmh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:03.037000 audit[4296]: NETFILTER_CFG table=filter:115 family=2 entries=44 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:03.039424 kernel: kauditd_printk_skb: 17 callbacks suppressed Sep 13 00:24:03.039480 kernel: audit: type=1325 audit(1757723043.037:405): table=filter:115 family=2 entries=44 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:03.037000 audit[4296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21532 a0=3 a1=ffffe9093720 a2=0 a3=ffff859ccfa8 items=0 ppid=3417 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.045270 kernel: audit: type=1300 audit(1757723043.037:405): arch=c00000b7 syscall=211 success=yes exit=21532 a0=3 a1=ffffe9093720 a2=0 a3=ffff859ccfa8 items=0 ppid=3417 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.045347 kernel: audit: type=1327 audit(1757723043.037:405): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 
00:24:03.037000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:24:03.052463 env[1315]: time="2025-09-13T00:24:03.052393826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:03.052463 env[1315]: time="2025-09-13T00:24:03.052437506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:03.052463 env[1315]: time="2025-09-13T00:24:03.052447666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:03.053159 env[1315]: time="2025-09-13T00:24:03.052673664Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748 pid=4310 runtime=io.containerd.runc.v2 Sep 13 00:24:03.053438 env[1315]: time="2025-09-13T00:24:03.053363259Z" level=info msg="StartContainer for \"cf40db10a44a38c7144dac004d4f67bd03373dccec8d6db913243b1af891baf8\" returns successfully" Sep 13 00:24:03.091272 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:24:03.116011 env[1315]: time="2025-09-13T00:24:03.115954552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfvmh,Uid:c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db,Namespace:kube-system,Attempt:1,} returns sandbox id \"24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748\"" Sep 13 00:24:03.116750 kubelet[2096]: E0913 00:24:03.116726 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 
00:24:03.118997 env[1315]: time="2025-09-13T00:24:03.118924451Z" level=info msg="CreateContainer within sandbox \"24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:24:03.132966 systemd-networkd[1096]: calida4c2c542b7: Link UP Sep 13 00:24:03.145432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calida4c2c542b7: link becomes ready Sep 13 00:24:03.145598 systemd-networkd[1096]: calida4c2c542b7: Gained carrier Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:02.900 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0 calico-apiserver-59db8f9d95- calico-apiserver a5f5c1be-6355-4194-9496-65fe9b497b32 1009 0 2025-09-13 00:23:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59db8f9d95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59db8f9d95-nhnkq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calida4c2c542b7 [] [] }} ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-nhnkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:02.901 [INFO][4219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-nhnkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:02.958 [INFO][4246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" HandleID="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:02.958 [INFO][4246] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" HandleID="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034aba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59db8f9d95-nhnkq", "timestamp":"2025-09-13 00:24:02.957764041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:02.958 [INFO][4246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:02.991 [INFO][4246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:02.991 [INFO][4246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.063 [INFO][4246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.076 [INFO][4246] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.090 [INFO][4246] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.094 [INFO][4246] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.098 [INFO][4246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.098 [INFO][4246] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.100 [INFO][4246] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.105 [INFO][4246] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.114 [INFO][4246] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" host="localhost" Sep 13 
00:24:03.164447 env[1315]: 2025-09-13 00:24:03.115 [INFO][4246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" host="localhost" Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.115 [INFO][4246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:03.164447 env[1315]: 2025-09-13 00:24:03.115 [INFO][4246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" HandleID="k8s-pod-network.ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:03.165046 env[1315]: 2025-09-13 00:24:03.118 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-nhnkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f5c1be-6355-4194-9496-65fe9b497b32", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59db8f9d95-nhnkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida4c2c542b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:03.165046 env[1315]: 2025-09-13 00:24:03.118 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-nhnkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:03.165046 env[1315]: 2025-09-13 00:24:03.118 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida4c2c542b7 ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-nhnkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:03.165046 env[1315]: 2025-09-13 00:24:03.146 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-nhnkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:03.165046 env[1315]: 2025-09-13 00:24:03.146 [INFO][4219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" 
Pod="calico-apiserver-59db8f9d95-nhnkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f5c1be-6355-4194-9496-65fe9b497b32", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba", Pod:"calico-apiserver-59db8f9d95-nhnkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida4c2c542b7", MAC:"2e:ca:20:a3:de:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:03.165046 env[1315]: 2025-09-13 00:24:03.157 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba" Namespace="calico-apiserver" Pod="calico-apiserver-59db8f9d95-nhnkq" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:03.165046 env[1315]: time="2025-09-13T00:24:03.160310009Z" level=info msg="CreateContainer within sandbox \"24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48fc10fb305e2e959ef797b6bf446245ede06f87e102fbf9533ddd39c8fd0030\"" Sep 13 00:24:03.168124 env[1315]: time="2025-09-13T00:24:03.168075796Z" level=info msg="StartContainer for \"48fc10fb305e2e959ef797b6bf446245ede06f87e102fbf9533ddd39c8fd0030\"" Sep 13 00:24:03.206041 env[1315]: time="2025-09-13T00:24:03.204683826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:03.206041 env[1315]: time="2025-09-13T00:24:03.204739145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:03.206041 env[1315]: time="2025-09-13T00:24:03.204748825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:03.206041 env[1315]: time="2025-09-13T00:24:03.205015503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba pid=4390 runtime=io.containerd.runc.v2 Sep 13 00:24:03.234000 audit[4421]: NETFILTER_CFG table=filter:116 family=2 entries=59 op=nft_register_chain pid=4421 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:03.234000 audit[4421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29492 a0=3 a1=ffffcfbff040 a2=0 a3=ffff95feafa8 items=0 ppid=3417 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.243605 kernel: audit: type=1325 audit(1757723043.234:406): table=filter:116 family=2 entries=59 op=nft_register_chain pid=4421 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:03.243687 kernel: audit: type=1300 audit(1757723043.234:406): arch=c00000b7 syscall=211 success=yes exit=29492 a0=3 a1=ffffcfbff040 a2=0 a3=ffff95feafa8 items=0 ppid=3417 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.234000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:24:03.245804 kernel: audit: type=1327 audit(1757723043.234:406): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:24:03.268447 systemd-resolved[1235]: Failed to determine the local hostname and 
LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:24:03.301681 env[1315]: time="2025-09-13T00:24:03.301631563Z" level=info msg="StartContainer for \"48fc10fb305e2e959ef797b6bf446245ede06f87e102fbf9533ddd39c8fd0030\" returns successfully" Sep 13 00:24:03.302707 env[1315]: time="2025-09-13T00:24:03.302675996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59db8f9d95-nhnkq,Uid:a5f5c1be-6355-4194-9496-65fe9b497b32,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba\"" Sep 13 00:24:03.308415 env[1315]: time="2025-09-13T00:24:03.308220678Z" level=info msg="CreateContainer within sandbox \"ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:24:03.360961 env[1315]: time="2025-09-13T00:24:03.360879398Z" level=info msg="CreateContainer within sandbox \"ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f6d7a54555168d0fb21c197c3b1cfe1557b937bfe32f8c19b0588f5b1ed31e72\"" Sep 13 00:24:03.361686 env[1315]: time="2025-09-13T00:24:03.361603913Z" level=info msg="StartContainer for \"f6d7a54555168d0fb21c197c3b1cfe1557b937bfe32f8c19b0588f5b1ed31e72\"" Sep 13 00:24:03.368833 systemd-networkd[1096]: cali4447082197a: Gained IPv6LL Sep 13 00:24:03.424145 env[1315]: time="2025-09-13T00:24:03.423657130Z" level=info msg="StartContainer for \"f6d7a54555168d0fb21c197c3b1cfe1557b937bfe32f8c19b0588f5b1ed31e72\" returns successfully" Sep 13 00:24:03.567699 env[1315]: time="2025-09-13T00:24:03.567593586Z" level=info msg="StopPodSandbox for \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\"" Sep 13 00:24:03.567812 env[1315]: time="2025-09-13T00:24:03.567750545Z" level=info msg="StopPodSandbox for \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\"" Sep 13 
00:24:03.737459 env[1315]: 2025-09-13 00:24:03.654 [INFO][4519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.654 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" iface="eth0" netns="/var/run/netns/cni-87c970bd-fa59-9bdb-737f-51d94a9f3929" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.654 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" iface="eth0" netns="/var/run/netns/cni-87c970bd-fa59-9bdb-737f-51d94a9f3929" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.658 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" iface="eth0" netns="/var/run/netns/cni-87c970bd-fa59-9bdb-737f-51d94a9f3929" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.658 [INFO][4519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.658 [INFO][4519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.715 [INFO][4537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.715 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.715 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.730 [WARNING][4537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.730 [INFO][4537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.732 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:03.737459 env[1315]: 2025-09-13 00:24:03.733 [INFO][4519] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:03.740111 env[1315]: time="2025-09-13T00:24:03.740069008Z" level=info msg="TearDown network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\" successfully" Sep 13 00:24:03.740111 env[1315]: time="2025-09-13T00:24:03.740108208Z" level=info msg="StopPodSandbox for \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\" returns successfully" Sep 13 00:24:03.740891 env[1315]: time="2025-09-13T00:24:03.740857562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589698f46b-2w2b2,Uid:734b3a8a-120f-488b-a1eb-812d2e9a1288,Namespace:calico-system,Attempt:1,}" Sep 13 00:24:03.742197 systemd[1]: run-netns-cni\x2d87c970bd\x2dfa59\x2d9bdb\x2d737f\x2d51d94a9f3929.mount: Deactivated successfully. Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.642 [INFO][4514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.642 [INFO][4514] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" iface="eth0" netns="/var/run/netns/cni-ee79fa6b-b0fc-7989-39a3-251b9264e9c4" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.643 [INFO][4514] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" iface="eth0" netns="/var/run/netns/cni-ee79fa6b-b0fc-7989-39a3-251b9264e9c4" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.643 [INFO][4514] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" iface="eth0" netns="/var/run/netns/cni-ee79fa6b-b0fc-7989-39a3-251b9264e9c4" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.643 [INFO][4514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.643 [INFO][4514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.730 [INFO][4531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.731 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.732 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.747 [WARNING][4531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.747 [INFO][4531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.758 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:03.763513 env[1315]: 2025-09-13 00:24:03.760 [INFO][4514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:03.766180 systemd[1]: run-netns-cni\x2dee79fa6b\x2db0fc\x2d7989\x2d39a3\x2d251b9264e9c4.mount: Deactivated successfully. 
Sep 13 00:24:03.767097 env[1315]: time="2025-09-13T00:24:03.767029584Z" level=info msg="TearDown network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\" successfully" Sep 13 00:24:03.767097 env[1315]: time="2025-09-13T00:24:03.767093263Z" level=info msg="StopPodSandbox for \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\" returns successfully" Sep 13 00:24:03.767774 env[1315]: time="2025-09-13T00:24:03.767746059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xhmc,Uid:cdf08d6b-aedb-443c-a2b0-45b46a85e022,Namespace:calico-system,Attempt:1,}" Sep 13 00:24:03.806430 kubelet[2096]: E0913 00:24:03.805315 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:03.860517 kubelet[2096]: I0913 00:24:03.852896 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59db8f9d95-nhnkq" podStartSLOduration=26.852878757 podStartE2EDuration="26.852878757s" podCreationTimestamp="2025-09-13 00:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:03.852844597 +0000 UTC m=+41.420244936" watchObservedRunningTime="2025-09-13 00:24:03.852878757 +0000 UTC m=+41.420279096" Sep 13 00:24:03.860517 kubelet[2096]: I0913 00:24:03.853238 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bfvmh" podStartSLOduration=35.853229995 podStartE2EDuration="35.853229995s" podCreationTimestamp="2025-09-13 00:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:03.832199858 +0000 UTC m=+41.399600197" watchObservedRunningTime="2025-09-13 00:24:03.853229995 +0000 UTC 
m=+41.420630334" Sep 13 00:24:03.892000 audit[4572]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=4572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:03.892000 audit[4572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff5491b80 a2=0 a3=1 items=0 ppid=2207 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.899637 kernel: audit: type=1325 audit(1757723043.892:407): table=filter:117 family=2 entries=12 op=nft_register_rule pid=4572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:03.899723 kernel: audit: type=1300 audit(1757723043.892:407): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff5491b80 a2=0 a3=1 items=0 ppid=2207 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.899747 kernel: audit: type=1327 audit(1757723043.892:407): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:03.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:03.907918 kubelet[2096]: I0913 00:24:03.907795 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59db8f9d95-s27wv" podStartSLOduration=24.883144052 podStartE2EDuration="26.907777422s" podCreationTimestamp="2025-09-13 00:23:37 +0000 UTC" firstStartedPulling="2025-09-13 00:24:00.920132763 +0000 UTC m=+38.487533102" lastFinishedPulling="2025-09-13 00:24:02.944766133 +0000 UTC m=+40.512166472" observedRunningTime="2025-09-13 00:24:03.907423064 +0000 UTC 
m=+41.474823403" watchObservedRunningTime="2025-09-13 00:24:03.907777422 +0000 UTC m=+41.475177761" Sep 13 00:24:03.916000 audit[4572]: NETFILTER_CFG table=nat:118 family=2 entries=46 op=nft_register_rule pid=4572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:03.925424 kernel: audit: type=1325 audit(1757723043.916:408): table=nat:118 family=2 entries=46 op=nft_register_rule pid=4572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:03.916000 audit[4572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=fffff5491b80 a2=0 a3=1 items=0 ppid=2207 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:03.964000 audit[4578]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:03.964000 audit[4578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe5ffb520 a2=0 a3=1 items=0 ppid=2207 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:03.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:04.016000 audit[4578]: NETFILTER_CFG table=nat:120 family=2 entries=58 op=nft_register_chain pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:04.016000 audit[4578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=ffffe5ffb520 a2=0 a3=1 items=0 ppid=2207 pid=4578 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:04.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:04.207365 systemd-networkd[1096]: calie891b90763b: Link UP Sep 13 00:24:04.209563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:24:04.209646 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie891b90763b: link becomes ready Sep 13 00:24:04.209840 systemd-networkd[1096]: calie891b90763b: Gained carrier Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:03.896 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0 calico-kube-controllers-589698f46b- calico-system 734b3a8a-120f-488b-a1eb-812d2e9a1288 1033 0 2025-09-13 00:23:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:589698f46b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-589698f46b-2w2b2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie891b90763b [] [] }} ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:03.896 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.099 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" HandleID="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.099 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" HandleID="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b5070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-589698f46b-2w2b2", "timestamp":"2025-09-13 00:24:04.099368132 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.099 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.099 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.099 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.126 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.135 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.155 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.167 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.170 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.170 [INFO][4579] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.172 [INFO][4579] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6 Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.178 [INFO][4579] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.188 [INFO][4579] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" host="localhost" Sep 13 
00:24:04.239650 env[1315]: 2025-09-13 00:24:04.188 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" host="localhost" Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.189 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:04.239650 env[1315]: 2025-09-13 00:24:04.189 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" HandleID="k8s-pod-network.dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:04.240540 env[1315]: 2025-09-13 00:24:04.198 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0", GenerateName:"calico-kube-controllers-589698f46b-", Namespace:"calico-system", SelfLink:"", UID:"734b3a8a-120f-488b-a1eb-812d2e9a1288", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589698f46b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-589698f46b-2w2b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie891b90763b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:04.240540 env[1315]: 2025-09-13 00:24:04.198 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:04.240540 env[1315]: 2025-09-13 00:24:04.198 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie891b90763b ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:04.240540 env[1315]: 2025-09-13 00:24:04.212 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:04.240540 env[1315]: 2025-09-13 00:24:04.213 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0", GenerateName:"calico-kube-controllers-589698f46b-", Namespace:"calico-system", SelfLink:"", UID:"734b3a8a-120f-488b-a1eb-812d2e9a1288", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589698f46b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6", Pod:"calico-kube-controllers-589698f46b-2w2b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie891b90763b", MAC:"ba:05:ab:3d:1f:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:04.240540 env[1315]: 2025-09-13 00:24:04.227 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6" Namespace="calico-system" Pod="calico-kube-controllers-589698f46b-2w2b2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:04.277463 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicd6d3a3941b: link becomes ready Sep 13 00:24:04.276014 systemd-networkd[1096]: calicd6d3a3941b: Link UP Sep 13 00:24:04.276255 systemd-networkd[1096]: calicd6d3a3941b: Gained carrier Sep 13 00:24:04.280000 audit[4611]: NETFILTER_CFG table=filter:121 family=2 entries=52 op=nft_register_chain pid=4611 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:04.280000 audit[4611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffec4fb310 a2=0 a3=ffffa4343fa8 items=0 ppid=3417 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:04.280000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:24:04.284273 env[1315]: time="2025-09-13T00:24:04.284190546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:04.284440 env[1315]: time="2025-09-13T00:24:04.284244985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:04.284440 env[1315]: time="2025-09-13T00:24:04.284256985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:04.284611 env[1315]: time="2025-09-13T00:24:04.284569623Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6 pid=4616 runtime=io.containerd.runc.v2 Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.027 [INFO][4559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7xhmc-eth0 csi-node-driver- calico-system cdf08d6b-aedb-443c-a2b0-45b46a85e022 1032 0 2025-09-13 00:23:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7xhmc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd6d3a3941b [] [] }} ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.027 [INFO][4559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.108 [INFO][4587] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" HandleID="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 
00:24:04.305177 env[1315]: 2025-09-13 00:24:04.109 [INFO][4587] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" HandleID="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7xhmc", "timestamp":"2025-09-13 00:24:04.108969389 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.109 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.188 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.189 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.228 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.241 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.248 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.250 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.253 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.253 [INFO][4587] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.254 [INFO][4587] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.259 [INFO][4587] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.267 [INFO][4587] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" host="localhost" Sep 13 
00:24:04.305177 env[1315]: 2025-09-13 00:24:04.267 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" host="localhost" Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.267 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:04.305177 env[1315]: 2025-09-13 00:24:04.267 [INFO][4587] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" HandleID="k8s-pod-network.ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:04.305796 env[1315]: 2025-09-13 00:24:04.270 [INFO][4559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xhmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdf08d6b-aedb-443c-a2b0-45b46a85e022", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7xhmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd6d3a3941b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:04.305796 env[1315]: 2025-09-13 00:24:04.270 [INFO][4559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:04.305796 env[1315]: 2025-09-13 00:24:04.270 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd6d3a3941b ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:04.305796 env[1315]: 2025-09-13 00:24:04.276 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:04.305796 env[1315]: 2025-09-13 00:24:04.277 [INFO][4559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xhmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdf08d6b-aedb-443c-a2b0-45b46a85e022", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c", Pod:"csi-node-driver-7xhmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd6d3a3941b", MAC:"ce:b1:69:f4:30:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:04.305796 env[1315]: 2025-09-13 00:24:04.296 [INFO][4559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c" Namespace="calico-system" Pod="csi-node-driver-7xhmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:04.367093 env[1315]: time="2025-09-13T00:24:04.366976676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:04.367093 env[1315]: time="2025-09-13T00:24:04.367026436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:04.367093 env[1315]: time="2025-09-13T00:24:04.367038956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:04.369426 env[1315]: time="2025-09-13T00:24:04.367402953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c pid=4647 runtime=io.containerd.runc.v2 Sep 13 00:24:04.396000 audit[4665]: NETFILTER_CFG table=filter:122 family=2 entries=56 op=nft_register_chain pid=4665 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:24:04.396000 audit[4665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=fffff4e5f8b0 a2=0 a3=ffff93caefa8 items=0 ppid=3417 pid=4665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:04.396000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:24:04.424421 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:24:04.451554 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:24:04.490159 env[1315]: time="2025-09-13T00:24:04.484646735Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-589698f46b-2w2b2,Uid:734b3a8a-120f-488b-a1eb-812d2e9a1288,Namespace:calico-system,Attempt:1,} returns sandbox id \"dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6\"" Sep 13 00:24:04.497502 env[1315]: time="2025-09-13T00:24:04.497458650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xhmc,Uid:cdf08d6b-aedb-443c-a2b0-45b46a85e022,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c\"" Sep 13 00:24:04.819184 kubelet[2096]: I0913 00:24:04.819147 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:04.819892 kubelet[2096]: E0913 00:24:04.819859 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:04.840888 systemd-networkd[1096]: cali12175103d0b: Gained IPv6LL Sep 13 00:24:04.941368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210615486.mount: Deactivated successfully. 
Sep 13 00:24:04.968508 systemd-networkd[1096]: calida4c2c542b7: Gained IPv6LL
Sep 13 00:24:05.384000 audit[4701]: NETFILTER_CFG table=filter:123 family=2 entries=11 op=nft_register_rule pid=4701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:24:05.384000 audit[4701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd0dac2c0 a2=0 a3=1 items=0 ppid=2207 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:05.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:24:05.391000 audit[4701]: NETFILTER_CFG table=nat:124 family=2 entries=29 op=nft_register_chain pid=4701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:24:05.391000 audit[4701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffd0dac2c0 a2=0 a3=1 items=0 ppid=2207 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:05.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:24:05.417922 systemd-networkd[1096]: calicd6d3a3941b: Gained IPv6LL
Sep 13 00:24:05.650948 env[1315]: time="2025-09-13T00:24:05.650829233Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:05.652856 env[1315]: time="2025-09-13T00:24:05.652828300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:05.654683 env[1315]: time="2025-09-13T00:24:05.654655328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:05.656749 env[1315]: time="2025-09-13T00:24:05.656708435Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:05.657417 env[1315]: time="2025-09-13T00:24:05.657374631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 13 00:24:05.661493 env[1315]: time="2025-09-13T00:24:05.661445684Z" level=info msg="CreateContainer within sandbox \"09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 13 00:24:05.661716 env[1315]: time="2025-09-13T00:24:05.661691963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 13 00:24:05.682568 env[1315]: time="2025-09-13T00:24:05.682512708Z" level=info msg="CreateContainer within sandbox \"09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d9e94c5d55cf677d258cc8ab2bbcb8011d9d929434e870065dbb863fcc3993c9\""
Sep 13 00:24:05.684451 env[1315]: time="2025-09-13T00:24:05.684419856Z" level=info msg="StartContainer for \"d9e94c5d55cf677d258cc8ab2bbcb8011d9d929434e870065dbb863fcc3993c9\""
Sep 13 00:24:05.714031 systemd[1]: run-containerd-runc-k8s.io-d9e94c5d55cf677d258cc8ab2bbcb8011d9d929434e870065dbb863fcc3993c9-runc.OsNHVu.mount: Deactivated successfully.
Sep 13 00:24:05.771291 env[1315]: time="2025-09-13T00:24:05.771244096Z" level=info msg="StartContainer for \"d9e94c5d55cf677d258cc8ab2bbcb8011d9d929434e870065dbb863fcc3993c9\" returns successfully"
Sep 13 00:24:05.823710 kubelet[2096]: E0913 00:24:05.823675 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:05.840550 kubelet[2096]: I0913 00:24:05.840479 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-r22kc" podStartSLOduration=21.257423437 podStartE2EDuration="24.840454289s" podCreationTimestamp="2025-09-13 00:23:41 +0000 UTC" firstStartedPulling="2025-09-13 00:24:02.07576969 +0000 UTC m=+39.643170029" lastFinishedPulling="2025-09-13 00:24:05.658800542 +0000 UTC m=+43.226200881" observedRunningTime="2025-09-13 00:24:05.840168771 +0000 UTC m=+43.407569110" watchObservedRunningTime="2025-09-13 00:24:05.840454289 +0000 UTC m=+43.407854628"
Sep 13 00:24:06.056921 systemd-networkd[1096]: calie891b90763b: Gained IPv6LL
Sep 13 00:24:06.409000 audit[4738]: NETFILTER_CFG table=filter:125 family=2 entries=10 op=nft_register_rule pid=4738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:24:06.409000 audit[4738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffc09f2ba0 a2=0 a3=1 items=0 ppid=2207 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:06.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:24:06.423000 audit[4738]: NETFILTER_CFG table=nat:126 family=2 entries=24 op=nft_register_rule pid=4738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:24:06.423000 audit[4738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffc09f2ba0 a2=0 a3=1 items=0 ppid=2207 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:06.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:24:06.824792 kubelet[2096]: I0913 00:24:06.824753 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:24:07.570218 env[1315]: time="2025-09-13T00:24:07.570172402Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:07.572717 env[1315]: time="2025-09-13T00:24:07.572686107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:07.574778 env[1315]: time="2025-09-13T00:24:07.574748734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:07.576527 env[1315]: time="2025-09-13T00:24:07.576488443Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:07.577191 env[1315]: time="2025-09-13T00:24:07.577165759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 13 00:24:07.578947 env[1315]: time="2025-09-13T00:24:07.578914069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 13 00:24:07.603235 env[1315]: time="2025-09-13T00:24:07.603174800Z" level=info msg="CreateContainer within sandbox \"dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 13 00:24:07.621571 env[1315]: time="2025-09-13T00:24:07.621520688Z" level=info msg="CreateContainer within sandbox \"dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"19eccb908018046d0d3abbb3a67e17b7de95e454165ccf6c29d6029eb188737a\""
Sep 13 00:24:07.623663 env[1315]: time="2025-09-13T00:24:07.623633035Z" level=info msg="StartContainer for \"19eccb908018046d0d3abbb3a67e17b7de95e454165ccf6c29d6029eb188737a\""
Sep 13 00:24:07.780762 env[1315]: time="2025-09-13T00:24:07.780707713Z" level=info msg="StartContainer for \"19eccb908018046d0d3abbb3a67e17b7de95e454165ccf6c29d6029eb188737a\" returns successfully"
Sep 13 00:24:07.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.117:22-10.0.0.1:55216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:07.792319 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:55216.service.
Sep 13 00:24:07.829849 kubelet[2096]: I0913 00:24:07.829069 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:24:07.852235 kubelet[2096]: I0913 00:24:07.851511 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-589698f46b-2w2b2" podStartSLOduration=23.759024934 podStartE2EDuration="26.851494959s" podCreationTimestamp="2025-09-13 00:23:41 +0000 UTC" firstStartedPulling="2025-09-13 00:24:04.485740368 +0000 UTC m=+42.053140707" lastFinishedPulling="2025-09-13 00:24:07.578210393 +0000 UTC m=+45.145610732" observedRunningTime="2025-09-13 00:24:07.850765484 +0000 UTC m=+45.418165823" watchObservedRunningTime="2025-09-13 00:24:07.851494959 +0000 UTC m=+45.418895298"
Sep 13 00:24:07.863000 audit[4805]: USER_ACCT pid=4805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:07.865219 sshd[4805]: Accepted publickey for core from 10.0.0.1 port 55216 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:24:07.867000 audit[4805]: CRED_ACQ pid=4805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:07.867000 audit[4805]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdb9619d0 a2=3 a3=1 items=0 ppid=1 pid=4805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:07.867000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:24:07.869779 sshd[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:24:07.879965 systemd-logind[1302]: New session 8 of user core.
Sep 13 00:24:07.880350 systemd[1]: Started session-8.scope.
Sep 13 00:24:07.900000 audit[4805]: USER_START pid=4805 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:07.903000 audit[4826]: CRED_ACQ pid=4826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:08.294688 sshd[4805]: pam_unix(sshd:session): session closed for user core
Sep 13 00:24:08.294000 audit[4805]: USER_END pid=4805 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:08.299245 kernel: kauditd_printk_skb: 34 callbacks suppressed
Sep 13 00:24:08.299348 kernel: audit: type=1106 audit(1757723048.294:423): pid=4805 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:08.299377 kernel: audit: type=1104 audit(1757723048.295:424): pid=4805 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:08.295000 audit[4805]: CRED_DISP pid=4805 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:08.297925 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:55216.service: Deactivated successfully.
Sep 13 00:24:08.298796 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:24:08.301765 systemd-logind[1302]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:24:08.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.117:22-10.0.0.1:55216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:08.304437 kernel: audit: type=1131 audit(1757723048.296:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.117:22-10.0.0.1:55216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:08.304679 systemd-logind[1302]: Removed session 8.
Sep 13 00:24:08.599236 env[1315]: time="2025-09-13T00:24:08.599129390Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:08.600701 env[1315]: time="2025-09-13T00:24:08.600667221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:08.602928 env[1315]: time="2025-09-13T00:24:08.602889888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:08.604553 env[1315]: time="2025-09-13T00:24:08.604522958Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:08.605029 env[1315]: time="2025-09-13T00:24:08.605001915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 13 00:24:08.610173 env[1315]: time="2025-09-13T00:24:08.610124445Z" level=info msg="CreateContainer within sandbox \"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 13 00:24:08.633227 env[1315]: time="2025-09-13T00:24:08.633166907Z" level=info msg="CreateContainer within sandbox \"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fca55b5bb6348b1f955988ac8b887829d1cf72cf51f74d7f5b0dd78bcf6d6741\""
Sep 13 00:24:08.633697 env[1315]: time="2025-09-13T00:24:08.633654144Z" level=info msg="StartContainer for \"fca55b5bb6348b1f955988ac8b887829d1cf72cf51f74d7f5b0dd78bcf6d6741\""
Sep 13 00:24:08.686722 env[1315]: time="2025-09-13T00:24:08.686670987Z" level=info msg="StartContainer for \"fca55b5bb6348b1f955988ac8b887829d1cf72cf51f74d7f5b0dd78bcf6d6741\" returns successfully"
Sep 13 00:24:08.687898 env[1315]: time="2025-09-13T00:24:08.687838540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 00:24:09.866948 env[1315]: time="2025-09-13T00:24:09.866901098Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:09.869260 env[1315]: time="2025-09-13T00:24:09.869231645Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:09.871233 env[1315]: time="2025-09-13T00:24:09.871197233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:09.873629 env[1315]: time="2025-09-13T00:24:09.873598699Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:24:09.874138 env[1315]: time="2025-09-13T00:24:09.874107976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 13 00:24:09.876595 env[1315]: time="2025-09-13T00:24:09.876563682Z" level=info msg="CreateContainer within sandbox \"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:24:09.892745 env[1315]: time="2025-09-13T00:24:09.892704268Z" level=info msg="CreateContainer within sandbox \"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e67fc4bf9253e8221034c866fdd0de0b6578dbd436ed799cbbe2c4a1c49511ea\""
Sep 13 00:24:09.893445 env[1315]: time="2025-09-13T00:24:09.893416784Z" level=info msg="StartContainer for \"e67fc4bf9253e8221034c866fdd0de0b6578dbd436ed799cbbe2c4a1c49511ea\""
Sep 13 00:24:09.989394 env[1315]: time="2025-09-13T00:24:09.989331904Z" level=info msg="StartContainer for \"e67fc4bf9253e8221034c866fdd0de0b6578dbd436ed799cbbe2c4a1c49511ea\" returns successfully"
Sep 13 00:24:10.644445 kubelet[2096]: I0913 00:24:10.644377 2096 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:24:10.644445 kubelet[2096]: I0913 00:24:10.644445 2096 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:24:10.851154 kubelet[2096]: I0913 00:24:10.851097 2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7xhmc" podStartSLOduration=25.474495063 podStartE2EDuration="30.85107111s" podCreationTimestamp="2025-09-13 00:23:40 +0000 UTC" firstStartedPulling="2025-09-13 00:24:04.498419204 +0000 UTC m=+42.065819543" lastFinishedPulling="2025-09-13 00:24:09.874995251 +0000 UTC m=+47.442395590" observedRunningTime="2025-09-13 00:24:10.850694152 +0000 UTC m=+48.418094491" watchObservedRunningTime="2025-09-13 00:24:10.85107111 +0000 UTC m=+48.418471449"
Sep 13 00:24:13.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.117:22-10.0.0.1:47214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:13.297970 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:47214.service.
Sep 13 00:24:13.303411 kernel: audit: type=1130 audit(1757723053.296:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.117:22-10.0.0.1:47214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:13.341000 audit[4936]: USER_ACCT pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.343080 sshd[4936]: Accepted publickey for core from 10.0.0.1 port 47214 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:24:13.345022 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:24:13.343000 audit[4936]: CRED_ACQ pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.348671 kernel: audit: type=1101 audit(1757723053.341:427): pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.348747 kernel: audit: type=1103 audit(1757723053.343:428): pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.350754 kernel: audit: type=1006 audit(1757723053.343:429): pid=4936 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Sep 13 00:24:13.343000 audit[4936]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcef07960 a2=3 a3=1 items=0 ppid=1 pid=4936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:13.355082 kernel: audit: type=1300 audit(1757723053.343:429): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcef07960 a2=3 a3=1 items=0 ppid=1 pid=4936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:13.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:24:13.356256 kernel: audit: type=1327 audit(1757723053.343:429): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:24:13.357807 systemd-logind[1302]: New session 9 of user core.
Sep 13 00:24:13.358603 systemd[1]: Started session-9.scope.
Sep 13 00:24:13.362000 audit[4936]: USER_START pid=4936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.368000 audit[4939]: CRED_ACQ pid=4939 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.372776 kernel: audit: type=1105 audit(1757723053.362:430): pid=4936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.372868 kernel: audit: type=1103 audit(1757723053.368:431): pid=4939 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.571160 sshd[4936]: pam_unix(sshd:session): session closed for user core
Sep 13 00:24:13.571000 audit[4936]: USER_END pid=4936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.574898 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:47214.service: Deactivated successfully.
Sep 13 00:24:13.575934 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:24:13.571000 audit[4936]: CRED_DISP pid=4936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.577570 systemd-logind[1302]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:24:13.579832 kernel: audit: type=1106 audit(1757723053.571:432): pid=4936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.579913 kernel: audit: type=1104 audit(1757723053.571:433): pid=4936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:13.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.117:22-10.0.0.1:47214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:13.581521 systemd-logind[1302]: Removed session 9.
Sep 13 00:24:18.574623 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:47228.service.
Sep 13 00:24:18.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.117:22-10.0.0.1:47228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:18.575774 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:24:18.575852 kernel: audit: type=1130 audit(1757723058.573:435): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.117:22-10.0.0.1:47228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:18.615000 audit[4959]: USER_ACCT pid=4959 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.617138 sshd[4959]: Accepted publickey for core from 10.0.0.1 port 47228 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:24:18.618867 sshd[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:24:18.617000 audit[4959]: CRED_ACQ pid=4959 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.622110 kernel: audit: type=1101 audit(1757723058.615:436): pid=4959 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.622189 kernel: audit: type=1103 audit(1757723058.617:437): pid=4959 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.622218 kernel: audit: type=1006 audit(1757723058.617:438): pid=4959 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Sep 13 00:24:18.617000 audit[4959]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed461040 a2=3 a3=1 items=0 ppid=1 pid=4959 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:18.627018 kernel: audit: type=1300 audit(1757723058.617:438): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed461040 a2=3 a3=1 items=0 ppid=1 pid=4959 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:18.627087 kernel: audit: type=1327 audit(1757723058.617:438): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:24:18.617000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:24:18.628642 systemd-logind[1302]: New session 10 of user core.
Sep 13 00:24:18.629491 systemd[1]: Started session-10.scope.
Sep 13 00:24:18.632000 audit[4959]: USER_START pid=4959 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.633000 audit[4962]: CRED_ACQ pid=4962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.639075 kernel: audit: type=1105 audit(1757723058.632:439): pid=4959 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.639149 kernel: audit: type=1103 audit(1757723058.633:440): pid=4962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.792163 sshd[4959]: pam_unix(sshd:session): session closed for user core
Sep 13 00:24:18.792000 audit[4959]: USER_END pid=4959 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.794635 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:47234.service.
Sep 13 00:24:18.792000 audit[4959]: CRED_DISP pid=4959 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.799793 kernel: audit: type=1106 audit(1757723058.792:441): pid=4959 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.799882 kernel: audit: type=1104 audit(1757723058.792:442): pid=4959 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.117:22-10.0.0.1:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:18.805215 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:47228.service: Deactivated successfully.
Sep 13 00:24:18.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.117:22-10.0.0.1:47228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:24:18.806317 systemd-logind[1302]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:24:18.806372 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:24:18.807870 systemd-logind[1302]: Removed session 10.
Sep 13 00:24:18.834000 audit[4973]: USER_ACCT pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.836314 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 47234 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:24:18.835000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:24:18.835000 audit[4973]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdab6d620 a2=3 a3=1 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:24:18.835000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:24:18.837552 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:24:18.841461 systemd-logind[1302]: New session 11 of user core.
Sep 13 00:24:18.842161 systemd[1]: Started session-11.scope.
Sep 13 00:24:18.844000 audit[4973]: USER_START pid=4973 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:18.846000 audit[4978]: CRED_ACQ pid=4978 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:18.983548 kubelet[2096]: I0913 00:24:18.983500 2096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:19.026000 audit[4986]: NETFILTER_CFG table=filter:127 family=2 entries=10 op=nft_register_rule pid=4986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:19.026000 audit[4986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffc93f58f0 a2=0 a3=1 items=0 ppid=2207 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:19.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:19.032171 sshd[4973]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:19.032000 audit[4973]: USER_END pid=4973 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.032000 audit[4973]: CRED_DISP pid=4973 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.035051 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:47248.service. Sep 13 00:24:19.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.117:22-10.0.0.1:47248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:19.036016 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:47234.service: Deactivated successfully. Sep 13 00:24:19.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.117:22-10.0.0.1:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:19.037607 systemd-logind[1302]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:24:19.037717 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:24:19.035000 audit[4986]: NETFILTER_CFG table=nat:128 family=2 entries=36 op=nft_register_chain pid=4986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:19.035000 audit[4986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffc93f58f0 a2=0 a3=1 items=0 ppid=2207 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:19.035000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:19.039701 systemd-logind[1302]: Removed session 11. 
Sep 13 00:24:19.073000 audit[4987]: USER_ACCT pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.074712 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 47248 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:19.074000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.074000 audit[4987]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff85451a0 a2=3 a3=1 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:19.074000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:19.076458 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:19.079927 systemd-logind[1302]: New session 12 of user core. Sep 13 00:24:19.080824 systemd[1]: Started session-12.scope. 
Sep 13 00:24:19.083000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.084000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.280469 sshd[4987]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:19.280000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.280000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:19.283307 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:47248.service: Deactivated successfully. Sep 13 00:24:19.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.117:22-10.0.0.1:47248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:19.284286 systemd-logind[1302]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:24:19.284324 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:24:19.285186 systemd-logind[1302]: Removed session 12. 
Sep 13 00:24:22.535597 env[1315]: time="2025-09-13T00:24:22.535534704Z" level=info msg="StopPodSandbox for \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\"" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.633 [WARNING][5014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748", Pod:"coredns-7c65d6cfc9-bfvmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12175103d0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.634 [INFO][5014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.635 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" iface="eth0" netns="" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.635 [INFO][5014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.635 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.685 [INFO][5028] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.685 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.685 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.698 [WARNING][5028] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.698 [INFO][5028] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.699 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:22.712317 env[1315]: 2025-09-13 00:24:22.710 [INFO][5014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.713115 env[1315]: time="2025-09-13T00:24:22.713071202Z" level=info msg="TearDown network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\" successfully" Sep 13 00:24:22.713320 env[1315]: time="2025-09-13T00:24:22.713268721Z" level=info msg="StopPodSandbox for \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\" returns successfully" Sep 13 00:24:22.715714 env[1315]: time="2025-09-13T00:24:22.715671430Z" level=info msg="RemovePodSandbox for \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\"" Sep 13 00:24:22.716064 env[1315]: time="2025-09-13T00:24:22.716010988Z" level=info msg="Forcibly stopping sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\"" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.771 [WARNING][5047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c178a51e-d9f0-4ef7-b1ba-7ddfa066b8db", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24cf15a4be112de34695734b30eeafa9f3713af56c94385636bb2866b60f6748", Pod:"coredns-7c65d6cfc9-bfvmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12175103d0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.772 [INFO][5047] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.772 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" iface="eth0" netns="" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.772 [INFO][5047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.772 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.796 [INFO][5058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.796 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.797 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.805 [WARNING][5058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.805 [INFO][5058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" HandleID="k8s-pod-network.69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Workload="localhost-k8s-coredns--7c65d6cfc9--bfvmh-eth0" Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.807 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:22.813604 env[1315]: 2025-09-13 00:24:22.812 [INFO][5047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76" Sep 13 00:24:22.814219 env[1315]: time="2025-09-13T00:24:22.814170933Z" level=info msg="TearDown network for sandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\" successfully" Sep 13 00:24:22.820480 env[1315]: time="2025-09-13T00:24:22.820442064Z" level=info msg="RemovePodSandbox \"69a87cc589b09b17df5d43f0f34bd4715183582a749f907bb69a668358e8ae76\" returns successfully" Sep 13 00:24:22.821203 env[1315]: time="2025-09-13T00:24:22.821177621Z" level=info msg="StopPodSandbox for \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\"" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.864 [WARNING][5075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0", GenerateName:"calico-kube-controllers-589698f46b-", Namespace:"calico-system", SelfLink:"", UID:"734b3a8a-120f-488b-a1eb-812d2e9a1288", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589698f46b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6", Pod:"calico-kube-controllers-589698f46b-2w2b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie891b90763b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.865 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.865 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" iface="eth0" netns="" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.865 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.865 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.915 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.915 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.916 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.925 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.925 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.926 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:22.930555 env[1315]: 2025-09-13 00:24:22.929 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:22.931088 env[1315]: time="2025-09-13T00:24:22.930591274Z" level=info msg="TearDown network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\" successfully" Sep 13 00:24:22.931088 env[1315]: time="2025-09-13T00:24:22.930623193Z" level=info msg="StopPodSandbox for \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\" returns successfully" Sep 13 00:24:22.932558 env[1315]: time="2025-09-13T00:24:22.932472505Z" level=info msg="RemovePodSandbox for \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\"" Sep 13 00:24:22.932558 env[1315]: time="2025-09-13T00:24:22.932519425Z" level=info msg="Forcibly stopping sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\"" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:22.983 [WARNING][5122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0", GenerateName:"calico-kube-controllers-589698f46b-", Namespace:"calico-system", SelfLink:"", UID:"734b3a8a-120f-488b-a1eb-812d2e9a1288", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589698f46b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dff18f49c2c74ad484dd771c640ee002622861a71a1a928e75f4759d8a2ae7a6", Pod:"calico-kube-controllers-589698f46b-2w2b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie891b90763b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:22.983 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:22.983 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" iface="eth0" netns="" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:22.983 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:22.983 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:23.008 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:23.008 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:23.008 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:23.018 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:23.019 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" HandleID="k8s-pod-network.8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Workload="localhost-k8s-calico--kube--controllers--589698f46b--2w2b2-eth0" Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:23.023 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.028523 env[1315]: 2025-09-13 00:24:23.025 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007" Sep 13 00:24:23.029002 env[1315]: time="2025-09-13T00:24:23.028557021Z" level=info msg="TearDown network for sandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\" successfully" Sep 13 00:24:23.031849 env[1315]: time="2025-09-13T00:24:23.031810006Z" level=info msg="RemovePodSandbox \"8abcb031e88824223e9f0a1bb44dc53ef9e6e47b1291b7ea30b33b3318bfe007\" returns successfully" Sep 13 00:24:23.032314 env[1315]: time="2025-09-13T00:24:23.032281364Z" level=info msg="StopPodSandbox for \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\"" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.066 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c0c72a36-f574-485d-b83f-4271860bd697", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5", Pod:"coredns-7c65d6cfc9-c5t9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid587c6b0ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.067 [INFO][5148] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.067 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" iface="eth0" netns="" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.067 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.067 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.086 [INFO][5157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.086 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.086 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.100 [WARNING][5157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.100 [INFO][5157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.102 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.109610 env[1315]: 2025-09-13 00:24:23.106 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.110168 env[1315]: time="2025-09-13T00:24:23.110129048Z" level=info msg="TearDown network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\" successfully" Sep 13 00:24:23.110237 env[1315]: time="2025-09-13T00:24:23.110221087Z" level=info msg="StopPodSandbox for \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\" returns successfully" Sep 13 00:24:23.110818 env[1315]: time="2025-09-13T00:24:23.110789445Z" level=info msg="RemovePodSandbox for \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\"" Sep 13 00:24:23.110891 env[1315]: time="2025-09-13T00:24:23.110830445Z" level=info msg="Forcibly stopping sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\"" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.147 [WARNING][5175] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c0c72a36-f574-485d-b83f-4271860bd697", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d82e1fb550b739efcdd4fb527bcd9ef55a09510d9857b5682bff4ced79c75ca5", Pod:"coredns-7c65d6cfc9-c5t9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid587c6b0ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.147 [INFO][5175] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.147 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" iface="eth0" netns="" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.147 [INFO][5175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.147 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.167 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.168 [INFO][5185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.168 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.177 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.177 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" HandleID="k8s-pod-network.4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Workload="localhost-k8s-coredns--7c65d6cfc9--c5t9w-eth0" Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.178 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.181490 env[1315]: 2025-09-13 00:24:23.180 [INFO][5175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241" Sep 13 00:24:23.181941 env[1315]: time="2025-09-13T00:24:23.181514681Z" level=info msg="TearDown network for sandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\" successfully" Sep 13 00:24:23.184652 env[1315]: time="2025-09-13T00:24:23.184618627Z" level=info msg="RemovePodSandbox \"4a043352ac4b8c09ffa613551b51ed7b92d78d0211b083adb4dace3851127241\" returns successfully" Sep 13 00:24:23.185209 env[1315]: time="2025-09-13T00:24:23.185168704Z" level=info msg="StopPodSandbox for \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\"" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.215 [WARNING][5203] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--r22kc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"893d60eb-0d9b-45af-8fda-3d0f54249b41", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b", Pod:"goldmane-7988f88666-r22kc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4447082197a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.215 [INFO][5203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.215 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" iface="eth0" netns="" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.215 [INFO][5203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.215 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.233 [INFO][5212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.233 [INFO][5212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.233 [INFO][5212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.241 [WARNING][5212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.241 [INFO][5212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.242 [INFO][5212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:23.246168 env[1315]: 2025-09-13 00:24:23.244 [INFO][5203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.246620 env[1315]: time="2025-09-13T00:24:23.246200625Z" level=info msg="TearDown network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\" successfully" Sep 13 00:24:23.246620 env[1315]: time="2025-09-13T00:24:23.246233825Z" level=info msg="StopPodSandbox for \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\" returns successfully" Sep 13 00:24:23.247054 env[1315]: time="2025-09-13T00:24:23.247029341Z" level=info msg="RemovePodSandbox for \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\"" Sep 13 00:24:23.247193 env[1315]: time="2025-09-13T00:24:23.247155541Z" level=info msg="Forcibly stopping sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\"" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.280 [WARNING][5230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--r22kc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"893d60eb-0d9b-45af-8fda-3d0f54249b41", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09c1ff88656931f39490731deccf47c33f95c85f5809c86738e12628738cb67b", Pod:"goldmane-7988f88666-r22kc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4447082197a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.280 [INFO][5230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.280 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" iface="eth0" netns="" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.280 [INFO][5230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.280 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.299 [INFO][5239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.299 [INFO][5239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.299 [INFO][5239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.308 [WARNING][5239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.308 [INFO][5239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" HandleID="k8s-pod-network.b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Workload="localhost-k8s-goldmane--7988f88666--r22kc-eth0" Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.309 [INFO][5239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:23.313083 env[1315]: 2025-09-13 00:24:23.311 [INFO][5230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6" Sep 13 00:24:23.313534 env[1315]: time="2025-09-13T00:24:23.313109319Z" level=info msg="TearDown network for sandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\" successfully" Sep 13 00:24:23.316418 env[1315]: time="2025-09-13T00:24:23.316377024Z" level=info msg="RemovePodSandbox \"b4b2becf2f85075c12fbd05cabd7d067f99db1e3044159f8452a024a7ec9b3e6\" returns successfully" Sep 13 00:24:23.317023 env[1315]: time="2025-09-13T00:24:23.317000061Z" level=info msg="StopPodSandbox for \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\"" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.351 [WARNING][5257] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", SelfLink:"", UID:"e90ef52a-67ed-4ab0-b978-c57c4259dadf", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370", Pod:"calico-apiserver-59db8f9d95-s27wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31cd1eb8a51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.352 [INFO][5257] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.352 [INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" iface="eth0" netns="" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.352 [INFO][5257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.352 [INFO][5257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.370 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.370 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.370 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.382 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.382 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.383 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.394432 env[1315]: 2025-09-13 00:24:23.390 [INFO][5257] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.394432 env[1315]: time="2025-09-13T00:24:23.392896433Z" level=info msg="TearDown network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\" successfully" Sep 13 00:24:23.394432 env[1315]: time="2025-09-13T00:24:23.392927273Z" level=info msg="StopPodSandbox for \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\" returns successfully" Sep 13 00:24:23.394432 env[1315]: time="2025-09-13T00:24:23.393805629Z" level=info msg="RemovePodSandbox for \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\"" Sep 13 00:24:23.394432 env[1315]: time="2025-09-13T00:24:23.393835269Z" level=info msg="Forcibly stopping sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\"" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.429 [WARNING][5285] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", SelfLink:"", UID:"e90ef52a-67ed-4ab0-b978-c57c4259dadf", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8075c630401d775c60cd897a63a303d38785a79bb38a023f7a019b0856be7370", Pod:"calico-apiserver-59db8f9d95-s27wv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31cd1eb8a51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.429 [INFO][5285] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.429 [INFO][5285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" iface="eth0" netns="" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.429 [INFO][5285] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.429 [INFO][5285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.451 [INFO][5293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.451 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.451 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.460 [WARNING][5293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.460 [INFO][5293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" HandleID="k8s-pod-network.fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Workload="localhost-k8s-calico--apiserver--59db8f9d95--s27wv-eth0" Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.462 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.465578 env[1315]: 2025-09-13 00:24:23.463 [INFO][5285] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440" Sep 13 00:24:23.466667 env[1315]: time="2025-09-13T00:24:23.465628421Z" level=info msg="TearDown network for sandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\" successfully" Sep 13 00:24:23.470111 env[1315]: time="2025-09-13T00:24:23.470058640Z" level=info msg="RemovePodSandbox \"fd6d29ba8adb8129d34334ec668786a91c11c18fb07dd89d29c6174813676440\" returns successfully" Sep 13 00:24:23.470675 env[1315]: time="2025-09-13T00:24:23.470626718Z" level=info msg="StopPodSandbox for \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\"" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.502 [WARNING][5311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f5c1be-6355-4194-9496-65fe9b497b32", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba", Pod:"calico-apiserver-59db8f9d95-nhnkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida4c2c542b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.502 [INFO][5311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.502 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" iface="eth0" netns="" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.502 [INFO][5311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.502 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.522 [INFO][5320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.523 [INFO][5320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.523 [INFO][5320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.531 [WARNING][5320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.531 [INFO][5320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.532 [INFO][5320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.536327 env[1315]: 2025-09-13 00:24:23.534 [INFO][5311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.537271 env[1315]: time="2025-09-13T00:24:23.536350617Z" level=info msg="TearDown network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\" successfully" Sep 13 00:24:23.537271 env[1315]: time="2025-09-13T00:24:23.536392897Z" level=info msg="StopPodSandbox for \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\" returns successfully" Sep 13 00:24:23.537271 env[1315]: time="2025-09-13T00:24:23.536783855Z" level=info msg="RemovePodSandbox for \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\"" Sep 13 00:24:23.537271 env[1315]: time="2025-09-13T00:24:23.536813175Z" level=info msg="Forcibly stopping sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\"" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.577 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0", GenerateName:"calico-apiserver-59db8f9d95-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f5c1be-6355-4194-9496-65fe9b497b32", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59db8f9d95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed8045dd6619ceb655d168423eeec1a40379d8dc12d107c14e7e8b93038694ba", Pod:"calico-apiserver-59db8f9d95-nhnkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida4c2c542b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.577 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.577 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" iface="eth0" netns="" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.577 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.577 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.594 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.594 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.594 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.602 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.602 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" HandleID="k8s-pod-network.1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Workload="localhost-k8s-calico--apiserver--59db8f9d95--nhnkq-eth0" Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.604 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.607370 env[1315]: 2025-09-13 00:24:23.605 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411" Sep 13 00:24:23.607828 env[1315]: time="2025-09-13T00:24:23.607416972Z" level=info msg="TearDown network for sandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\" successfully" Sep 13 00:24:23.610597 env[1315]: time="2025-09-13T00:24:23.610553437Z" level=info msg="RemovePodSandbox \"1f35649694a46aa5b71105e17b7d79c9641386e0721bfc02bea982e4f9277411\" returns successfully" Sep 13 00:24:23.611058 env[1315]: time="2025-09-13T00:24:23.611033755Z" level=info msg="StopPodSandbox for \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\"" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.643 [WARNING][5366] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" WorkloadEndpoint="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.643 [INFO][5366] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.643 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" iface="eth0" netns="" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.643 [INFO][5366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.643 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.663 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.663 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.664 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.673 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.673 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.675 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.678871 env[1315]: 2025-09-13 00:24:23.676 [INFO][5366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.678871 env[1315]: time="2025-09-13T00:24:23.678812245Z" level=info msg="TearDown network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\" successfully" Sep 13 00:24:23.678871 env[1315]: time="2025-09-13T00:24:23.678842645Z" level=info msg="StopPodSandbox for \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\" returns successfully" Sep 13 00:24:23.681503 env[1315]: time="2025-09-13T00:24:23.681474753Z" level=info msg="RemovePodSandbox for \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\"" Sep 13 00:24:23.681567 env[1315]: time="2025-09-13T00:24:23.681511912Z" level=info msg="Forcibly stopping sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\"" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.712 [WARNING][5392] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" 
WorkloadEndpoint="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.713 [INFO][5392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.713 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" iface="eth0" netns="" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.713 [INFO][5392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.713 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.744 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.744 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.744 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.755 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.755 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" HandleID="k8s-pod-network.197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Workload="localhost-k8s-whisker--56898f466d--kk7x6-eth0" Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.757 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:23.761037 env[1315]: 2025-09-13 00:24:23.759 [INFO][5392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d" Sep 13 00:24:23.761645 env[1315]: time="2025-09-13T00:24:23.761073948Z" level=info msg="TearDown network for sandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\" successfully" Sep 13 00:24:23.764317 env[1315]: time="2025-09-13T00:24:23.763918335Z" level=info msg="RemovePodSandbox \"197442835eb3404451b480de7b90c24680c551c381ded7a8d95401d33b142c8d\" returns successfully" Sep 13 00:24:23.764518 env[1315]: time="2025-09-13T00:24:23.764442133Z" level=info msg="StopPodSandbox for \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\"" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.855 [WARNING][5418] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xhmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdf08d6b-aedb-443c-a2b0-45b46a85e022", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c", Pod:"csi-node-driver-7xhmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd6d3a3941b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.855 [INFO][5418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.855 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" iface="eth0" netns="" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.855 [INFO][5418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.855 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.878 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.878 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.879 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.895 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.895 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.897 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:23.902768 env[1315]: 2025-09-13 00:24:23.899 [INFO][5418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:23.903201 env[1315]: time="2025-09-13T00:24:23.902796500Z" level=info msg="TearDown network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\" successfully" Sep 13 00:24:23.903201 env[1315]: time="2025-09-13T00:24:23.902828499Z" level=info msg="StopPodSandbox for \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\" returns successfully" Sep 13 00:24:23.903323 env[1315]: time="2025-09-13T00:24:23.903283497Z" level=info msg="RemovePodSandbox for \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\"" Sep 13 00:24:23.903362 env[1315]: time="2025-09-13T00:24:23.903326977Z" level=info msg="Forcibly stopping sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\"" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:23.967 [WARNING][5445] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xhmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdf08d6b-aedb-443c-a2b0-45b46a85e022", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed1007ecc4eb017bf12be6315db44d82a5f3e17b1af676f8dc03390ce32a476c", Pod:"csi-node-driver-7xhmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd6d3a3941b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:23.967 [INFO][5445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:23.967 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" iface="eth0" netns="" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:23.967 [INFO][5445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:23.967 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:24.001 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:24.002 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:24.002 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:24.011 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:24.012 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" HandleID="k8s-pod-network.4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Workload="localhost-k8s-csi--node--driver--7xhmc-eth0" Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:24.017 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:24.033917 env[1315]: 2025-09-13 00:24:24.028 [INFO][5445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a" Sep 13 00:24:24.039522 env[1315]: time="2025-09-13T00:24:24.039477236Z" level=info msg="TearDown network for sandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\" successfully" Sep 13 00:24:24.044271 env[1315]: time="2025-09-13T00:24:24.044236175Z" level=info msg="RemovePodSandbox \"4b3e9195fd5bab57f12a196924a8e07776f17489d2d2c55b0af3308ad8fc739a\" returns successfully" Sep 13 00:24:24.283516 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:47406.service. Sep 13 00:24:24.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.117:22-10.0.0.1:47406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:24.287017 kernel: kauditd_printk_skb: 29 callbacks suppressed Sep 13 00:24:24.287107 kernel: audit: type=1130 audit(1757723064.282:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.117:22-10.0.0.1:47406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:24.325000 audit[5461]: USER_ACCT pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.326787 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 47406 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:24.328426 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:24.326000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.331647 kernel: audit: type=1101 audit(1757723064.325:465): pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.331710 kernel: audit: type=1103 audit(1757723064.326:466): pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.331736 kernel: audit: type=1006 audit(1757723064.326:467): pid=5461 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 13 00:24:24.326000 audit[5461]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff381c240 a2=3 a3=1 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:24.338261 
kernel: audit: type=1300 audit(1757723064.326:467): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff381c240 a2=3 a3=1 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:24.338342 kernel: audit: type=1327 audit(1757723064.326:467): proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:24.326000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:24.336941 systemd-logind[1302]: New session 13 of user core. Sep 13 00:24:24.337773 systemd[1]: Started session-13.scope. Sep 13 00:24:24.348000 audit[5461]: USER_START pid=5461 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.353543 kernel: audit: type=1105 audit(1757723064.348:468): pid=5461 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.352000 audit[5464]: CRED_ACQ pid=5464 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.357503 kernel: audit: type=1103 audit(1757723064.352:469): pid=5464 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.579090 sshd[5461]: pam_unix(sshd:session): session closed for user core Sep 13 
00:24:24.578000 audit[5461]: USER_END pid=5461 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.581714 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:47406.service: Deactivated successfully. Sep 13 00:24:24.582562 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:24:24.578000 audit[5461]: CRED_DISP pid=5461 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.584020 systemd-logind[1302]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:24:24.584883 systemd-logind[1302]: Removed session 13. Sep 13 00:24:24.586068 kernel: audit: type=1106 audit(1757723064.578:470): pid=5461 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.586131 kernel: audit: type=1104 audit(1757723064.578:471): pid=5461 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:24.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.117:22-10.0.0.1:47406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:29.582123 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:47408.service. 
Sep 13 00:24:29.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.117:22-10.0.0.1:47408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:29.585176 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:24:29.585238 kernel: audit: type=1130 audit(1757723069.580:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.117:22-10.0.0.1:47408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:29.613000 audit[5502]: USER_ACCT pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.614969 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 47408 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:29.616480 sshd[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:29.614000 audit[5502]: CRED_ACQ pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.619832 kernel: audit: type=1101 audit(1757723069.613:474): pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.619883 kernel: audit: type=1103 audit(1757723069.614:475): pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.619913 kernel: audit: type=1006 audit(1757723069.614:476): pid=5502 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 13 00:24:29.614000 audit[5502]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb6ff100 a2=3 a3=1 items=0 ppid=1 pid=5502 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:29.621267 systemd-logind[1302]: New session 14 of user core. Sep 13 00:24:29.621842 systemd[1]: Started session-14.scope. Sep 13 00:24:29.623948 kernel: audit: type=1300 audit(1757723069.614:476): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb6ff100 a2=3 a3=1 items=0 ppid=1 pid=5502 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:29.623996 kernel: audit: type=1327 audit(1757723069.614:476): proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:29.614000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:29.624000 audit[5502]: USER_START pid=5502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.625000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.632134 kernel: audit: type=1105 audit(1757723069.624:477): pid=5502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.632182 kernel: audit: type=1103 audit(1757723069.625:478): pid=5505 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.767108 sshd[5502]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:29.774518 kernel: audit: type=1106 audit(1757723069.767:479): pid=5502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.774584 kernel: audit: type=1104 audit(1757723069.767:480): pid=5502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.767000 audit[5502]: USER_END pid=5502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.767000 audit[5502]: CRED_DISP pid=5502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:29.775694 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:47408.service: Deactivated successfully. 
Sep 13 00:24:29.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.117:22-10.0.0.1:47408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:29.776725 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:24:29.776892 systemd-logind[1302]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:24:29.782140 systemd-logind[1302]: Removed session 14. Sep 13 00:24:34.770299 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:50538.service. Sep 13 00:24:34.773727 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:24:34.773813 kernel: audit: type=1130 audit(1757723074.769:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.117:22-10.0.0.1:50538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:34.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.117:22-10.0.0.1:50538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:34.809000 audit[5523]: USER_ACCT pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.811372 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 50538 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:34.812474 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:34.810000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.816263 kernel: audit: type=1101 audit(1757723074.809:483): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.817447 kernel: audit: type=1103 audit(1757723074.810:484): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.817473 kernel: audit: type=1006 audit(1757723074.810:485): pid=5523 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 13 00:24:34.817182 systemd[1]: Started session-15.scope. 
Sep 13 00:24:34.818069 kernel: audit: type=1300 audit(1757723074.810:485): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe392a380 a2=3 a3=1 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:34.810000 audit[5523]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe392a380 a2=3 a3=1 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:34.817619 systemd-logind[1302]: New session 15 of user core. Sep 13 00:24:34.820480 kernel: audit: type=1327 audit(1757723074.810:485): proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:34.810000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:34.822792 kernel: audit: type=1105 audit(1757723074.820:486): pid=5523 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.820000 audit[5523]: USER_START pid=5523 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.822000 audit[5526]: CRED_ACQ pid=5526 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.830758 kernel: audit: type=1103 audit(1757723074.822:487): pid=5526 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.986942 sshd[5523]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:34.986000 audit[5523]: USER_END pid=5523 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.986000 audit[5523]: CRED_DISP pid=5523 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.993549 kernel: audit: type=1106 audit(1757723074.986:488): pid=5523 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.993673 kernel: audit: type=1104 audit(1757723074.986:489): pid=5523 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:34.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.117:22-10.0.0.1:50538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:34.995763 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:50538.service: Deactivated successfully. Sep 13 00:24:34.996571 systemd[1]: session-15.scope: Deactivated successfully. 
Sep 13 00:24:34.996768 systemd-logind[1302]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:24:34.998240 systemd-logind[1302]: Removed session 15. Sep 13 00:24:35.568030 kubelet[2096]: E0913 00:24:35.567904 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:37.759000 audit[5562]: NETFILTER_CFG table=filter:129 family=2 entries=9 op=nft_register_rule pid=5562 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:37.759000 audit[5562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd460efc0 a2=0 a3=1 items=0 ppid=2207 pid=5562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:37.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:37.766000 audit[5562]: NETFILTER_CFG table=nat:130 family=2 entries=31 op=nft_register_chain pid=5562 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:37.766000 audit[5562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffd460efc0 a2=0 a3=1 items=0 ppid=2207 pid=5562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:37.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:39.989554 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:34052.service. 
Sep 13 00:24:39.990614 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:24:39.990682 kernel: audit: type=1130 audit(1757723079.989:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.117:22-10.0.0.1:34052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:39.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.117:22-10.0.0.1:34052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:40.025000 audit[5563]: USER_ACCT pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.026051 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 34052 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:40.029403 kernel: audit: type=1101 audit(1757723080.025:494): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.029000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.029936 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:40.033469 kernel: audit: type=1103 audit(1757723080.029:495): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.033538 kernel: audit: type=1006 audit(1757723080.029:496): pid=5563 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 13 00:24:40.033564 kernel: audit: type=1300 audit(1757723080.029:496): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed500620 a2=3 a3=1 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:40.029000 audit[5563]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed500620 a2=3 a3=1 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:40.035184 systemd-logind[1302]: New session 16 of user core. Sep 13 00:24:40.035846 systemd[1]: Started session-16.scope. 
Sep 13 00:24:40.036262 kernel: audit: type=1327 audit(1757723080.029:496): proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:40.029000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:40.041000 audit[5563]: USER_START pid=5563 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.042000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.048101 kernel: audit: type=1105 audit(1757723080.041:497): pid=5563 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.048174 kernel: audit: type=1103 audit(1757723080.042:498): pid=5566 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.207586 sshd[5563]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:40.208000 audit[5563]: USER_END pid=5563 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.208000 audit[5563]: CRED_DISP pid=5563 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.209817 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:34056.service. Sep 13 00:24:40.212352 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:34052.service: Deactivated successfully. Sep 13 00:24:40.214415 kernel: audit: type=1106 audit(1757723080.208:499): pid=5563 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.214489 kernel: audit: type=1104 audit(1757723080.208:500): pid=5563 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.117:22-10.0.0.1:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:40.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.117:22-10.0.0.1:34052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:40.213156 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:24:40.214315 systemd-logind[1302]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:24:40.215187 systemd-logind[1302]: Removed session 16. 
Sep 13 00:24:40.242000 audit[5575]: USER_ACCT pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.243230 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 34056 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:40.243000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.244000 audit[5575]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0495cb0 a2=3 a3=1 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:40.244000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:40.244745 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:40.248573 systemd-logind[1302]: New session 17 of user core. Sep 13 00:24:40.248701 systemd[1]: Started session-17.scope. 
Sep 13 00:24:40.251000 audit[5575]: USER_START pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.253000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.463585 sshd[5575]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:40.464000 audit[5575]: USER_END pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.464000 audit[5575]: CRED_DISP pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.465868 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:34066.service. Sep 13 00:24:40.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.117:22-10.0.0.1:34066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:40.469641 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:34056.service: Deactivated successfully. Sep 13 00:24:40.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.117:22-10.0.0.1:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:40.470585 systemd-logind[1302]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:24:40.470622 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:24:40.471332 systemd-logind[1302]: Removed session 17. Sep 13 00:24:40.508000 audit[5587]: USER_ACCT pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.508892 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 34066 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:40.510000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.510000 audit[5587]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3f49b90 a2=3 a3=1 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:40.510000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:40.511244 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:40.514792 systemd-logind[1302]: New session 18 of user core. Sep 13 00:24:40.515599 systemd[1]: Started session-18.scope. 
Sep 13 00:24:40.519000 audit[5587]: USER_START pid=5587 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:40.520000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.190000 audit[5604]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:42.190000 audit[5604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffd44980d0 a2=0 a3=1 items=0 ppid=2207 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:42.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:42.197000 audit[5604]: NETFILTER_CFG table=nat:132 family=2 entries=26 op=nft_register_rule pid=5604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:42.197000 audit[5604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffd44980d0 a2=0 a3=1 items=0 ppid=2207 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:42.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:42.199838 sshd[5587]: 
pam_unix(sshd:session): session closed for user core Sep 13 00:24:42.201167 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:34078.service. Sep 13 00:24:42.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.117:22-10.0.0.1:34078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:42.202000 audit[5587]: USER_END pid=5587 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.202000 audit[5587]: CRED_DISP pid=5587 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.204866 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:34066.service: Deactivated successfully. Sep 13 00:24:42.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.117:22-10.0.0.1:34066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:42.206090 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:24:42.207569 systemd-logind[1302]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:24:42.211186 systemd-logind[1302]: Removed session 18. 
Sep 13 00:24:42.220000 audit[5610]: NETFILTER_CFG table=filter:133 family=2 entries=32 op=nft_register_rule pid=5610 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:42.220000 audit[5610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe6c7e920 a2=0 a3=1 items=0 ppid=2207 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:42.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:42.225000 audit[5610]: NETFILTER_CFG table=nat:134 family=2 entries=26 op=nft_register_rule pid=5610 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:42.225000 audit[5610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe6c7e920 a2=0 a3=1 items=0 ppid=2207 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:42.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:42.241000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.242421 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 34078 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:42.243000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.243000 audit[5605]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6acf800 a2=3 a3=1 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:42.243000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:42.243952 sshd[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:42.249278 systemd-logind[1302]: New session 19 of user core. Sep 13 00:24:42.250032 systemd[1]: Started session-19.scope. Sep 13 00:24:42.254000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.257000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.716555 sshd[5605]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:42.718757 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:34084.service. Sep 13 00:24:42.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.117:22-10.0.0.1:34084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:42.720000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.720000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.722624 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:34078.service: Deactivated successfully. Sep 13 00:24:42.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.117:22-10.0.0.1:34078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:42.723888 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:24:42.724334 systemd-logind[1302]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:24:42.728645 systemd-logind[1302]: Removed session 19. 
Sep 13 00:24:42.770804 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 34084 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:42.770000 audit[5620]: USER_ACCT pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.772077 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:42.771000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.771000 audit[5620]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1541ab0 a2=3 a3=1 items=0 ppid=1 pid=5620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:42.771000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:42.776708 systemd[1]: Started session-20.scope. Sep 13 00:24:42.777034 systemd-logind[1302]: New session 20 of user core. 
Sep 13 00:24:42.780000 audit[5620]: USER_START pid=5620 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.782000 audit[5625]: CRED_ACQ pid=5625 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.939098 sshd[5620]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:42.939000 audit[5620]: USER_END pid=5620 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.939000 audit[5620]: CRED_DISP pid=5620 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:42.941849 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:34084.service: Deactivated successfully. Sep 13 00:24:42.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.117:22-10.0.0.1:34084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:42.944183 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:24:42.944698 systemd-logind[1302]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:24:42.950138 systemd-logind[1302]: Removed session 20. 
Sep 13 00:24:47.567517 kubelet[2096]: E0913 00:24:47.567471 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:47.635000 audit[5640]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:47.636443 kernel: kauditd_printk_skb: 57 callbacks suppressed Sep 13 00:24:47.636493 kernel: audit: type=1325 audit(1757723087.635:542): table=filter:135 family=2 entries=20 op=nft_register_rule pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:47.635000 audit[5640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc56c45a0 a2=0 a3=1 items=0 ppid=2207 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:47.642865 kernel: audit: type=1300 audit(1757723087.635:542): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc56c45a0 a2=0 a3=1 items=0 ppid=2207 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:47.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:47.645109 kernel: audit: type=1327 audit(1757723087.635:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:47.647000 audit[5640]: NETFILTER_CFG table=nat:136 family=2 entries=110 op=nft_register_chain pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:47.647000 audit[5640]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=50988 a0=3 a1=ffffc56c45a0 a2=0 a3=1 items=0 ppid=2207 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:47.654706 kernel: audit: type=1325 audit(1757723087.647:543): table=nat:136 family=2 entries=110 op=nft_register_chain pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:24:47.654758 kernel: audit: type=1300 audit(1757723087.647:543): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffc56c45a0 a2=0 a3=1 items=0 ppid=2207 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:47.654798 kernel: audit: type=1327 audit(1757723087.647:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:47.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:24:47.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.117:22-10.0.0.1:34086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:47.944228 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:34086.service. Sep 13 00:24:47.947405 kernel: audit: type=1130 audit(1757723087.943:544): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.117:22-10.0.0.1:34086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:47.978000 audit[5642]: USER_ACCT pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:47.979475 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 34086 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:47.980950 sshd[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:47.980000 audit[5642]: CRED_ACQ pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:47.984604 kernel: audit: type=1101 audit(1757723087.978:545): pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:47.984677 kernel: audit: type=1103 audit(1757723087.980:546): pid=5642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:47.984701 kernel: audit: type=1006 audit(1757723087.980:547): pid=5642 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Sep 13 00:24:47.980000 audit[5642]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8b24b70 a2=3 a3=1 items=0 ppid=1 pid=5642 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:47.980000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:47.988692 systemd-logind[1302]: New session 21 of user core. Sep 13 00:24:47.989618 systemd[1]: Started session-21.scope. Sep 13 00:24:47.999000 audit[5642]: USER_START pid=5642 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:48.001000 audit[5645]: CRED_ACQ pid=5645 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:48.117367 sshd[5642]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:48.117000 audit[5642]: USER_END pid=5642 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:48.118000 audit[5642]: CRED_DISP pid=5642 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:48.120064 systemd-logind[1302]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:24:48.120197 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:34086.service: Deactivated successfully. Sep 13 00:24:48.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.117:22-10.0.0.1:34086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:48.121043 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:24:48.121504 systemd-logind[1302]: Removed session 21. Sep 13 00:24:50.568055 kubelet[2096]: E0913 00:24:50.568020 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:52.569663 kubelet[2096]: E0913 00:24:52.569633 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:53.120502 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:54212.service. Sep 13 00:24:53.124010 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:24:53.124098 kernel: audit: type=1130 audit(1757723093.120:553): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.117:22-10.0.0.1:54212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:53.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.117:22-10.0.0.1:54212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:53.170000 audit[5676]: USER_ACCT pid=5676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.170759 sshd[5676]: Accepted publickey for core from 10.0.0.1 port 54212 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:53.172336 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:53.171000 audit[5676]: CRED_ACQ pid=5676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.175599 kernel: audit: type=1101 audit(1757723093.170:554): pid=5676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.175682 kernel: audit: type=1103 audit(1757723093.171:555): pid=5676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.175705 kernel: audit: type=1006 audit(1757723093.171:556): pid=5676 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Sep 13 00:24:53.171000 audit[5676]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0624730 a2=3 a3=1 items=0 ppid=1 pid=5676 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:53.179912 
kernel: audit: type=1300 audit(1757723093.171:556): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0624730 a2=3 a3=1 items=0 ppid=1 pid=5676 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:53.171000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:53.181161 kernel: audit: type=1327 audit(1757723093.171:556): proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:53.184768 systemd[1]: Started session-22.scope. Sep 13 00:24:53.184986 systemd-logind[1302]: New session 22 of user core. Sep 13 00:24:53.189000 audit[5676]: USER_START pid=5676 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.190000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.194703 kernel: audit: type=1105 audit(1757723093.189:557): pid=5676 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.194772 kernel: audit: type=1103 audit(1757723093.190:558): pid=5679 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.312026 sshd[5676]: pam_unix(sshd:session): session closed for user core Sep 13 
00:24:53.312000 audit[5676]: USER_END pid=5676 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.314718 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:54212.service: Deactivated successfully. Sep 13 00:24:53.316157 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:24:53.312000 audit[5676]: CRED_DISP pid=5676 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.317177 systemd-logind[1302]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:24:53.317966 systemd-logind[1302]: Removed session 22. Sep 13 00:24:53.318740 kernel: audit: type=1106 audit(1757723093.312:559): pid=5676 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.318785 kernel: audit: type=1104 audit(1757723093.312:560): pid=5676 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:53.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.117:22-10.0.0.1:54212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:24:55.567562 kubelet[2096]: E0913 00:24:55.567518 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:58.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.117:22-10.0.0.1:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:58.316649 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:54214.service. Sep 13 00:24:58.317792 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:24:58.317821 kernel: audit: type=1130 audit(1757723098.316:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.117:22-10.0.0.1:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:58.353000 audit[5713]: USER_ACCT pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.353773 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 54214 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:24:58.357000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.357990 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:24:58.361213 kernel: audit: type=1101 audit(1757723098.353:563): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.361291 kernel: audit: type=1103 audit(1757723098.357:564): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.361319 kernel: audit: type=1006 audit(1757723098.357:565): pid=5713 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 13 00:24:58.362576 systemd[1]: Started session-23.scope. Sep 13 00:24:58.362932 systemd-logind[1302]: New session 23 of user core. Sep 13 00:24:58.357000 audit[5713]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd66e50b0 a2=3 a3=1 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:58.369013 kernel: audit: type=1300 audit(1757723098.357:565): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd66e50b0 a2=3 a3=1 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:24:58.369073 kernel: audit: type=1327 audit(1757723098.357:565): proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:58.357000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:24:58.370235 kernel: audit: type=1105 audit(1757723098.367:566): pid=5713 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 
00:24:58.367000 audit[5713]: USER_START pid=5713 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.369000 audit[5716]: CRED_ACQ pid=5716 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.376462 kernel: audit: type=1103 audit(1757723098.369:567): pid=5716 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.476565 sshd[5713]: pam_unix(sshd:session): session closed for user core Sep 13 00:24:58.478000 audit[5713]: USER_END pid=5713 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.480488 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:54214.service: Deactivated successfully. Sep 13 00:24:58.481309 systemd[1]: session-23.scope: Deactivated successfully. 
Sep 13 00:24:58.478000 audit[5713]: CRED_DISP pid=5713 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.484962 kernel: audit: type=1106 audit(1757723098.478:568): pid=5713 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.485025 kernel: audit: type=1104 audit(1757723098.478:569): pid=5713 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:24:58.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.117:22-10.0.0.1:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:24:58.485458 systemd-logind[1302]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:24:58.486397 systemd-logind[1302]: Removed session 23. Sep 13 00:25:03.481232 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:54162.service. Sep 13 00:25:03.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.117:22-10.0.0.1:54162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:25:03.484829 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:25:03.484906 kernel: audit: type=1130 audit(1757723103.481:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.117:22-10.0.0.1:54162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:25:03.512000 audit[5749]: USER_ACCT pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.513250 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 54162 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:25:03.514813 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:25:03.514000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.518360 kernel: audit: type=1101 audit(1757723103.512:572): pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.518433 kernel: audit: type=1103 audit(1757723103.514:573): pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.519966 kernel: audit: type=1006 audit(1757723103.514:574): pid=5749 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=24 res=1 Sep 13 00:25:03.514000 audit[5749]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc003f4e0 a2=3 a3=1 items=0 ppid=1 pid=5749 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:25:03.520978 systemd[1]: Started session-24.scope. Sep 13 00:25:03.523211 kernel: audit: type=1300 audit(1757723103.514:574): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc003f4e0 a2=3 a3=1 items=0 ppid=1 pid=5749 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:25:03.522167 systemd-logind[1302]: New session 24 of user core. Sep 13 00:25:03.525409 kernel: audit: type=1327 audit(1757723103.514:574): proctitle=737368643A20636F7265205B707269765D Sep 13 00:25:03.514000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:25:03.527000 audit[5749]: USER_START pid=5749 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.530000 audit[5752]: CRED_ACQ pid=5752 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.533640 kernel: audit: type=1105 audit(1757723103.527:575): pid=5749 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.533682 kernel: 
audit: type=1103 audit(1757723103.530:576): pid=5752 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.650226 sshd[5749]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:03.651000 audit[5749]: USER_END pid=5749 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.655168 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:54162.service: Deactivated successfully. Sep 13 00:25:03.651000 audit[5749]: CRED_DISP pid=5749 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.656428 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:25:03.656445 systemd-logind[1302]: Session 24 logged out. Waiting for processes to exit. 
Sep 13 00:25:03.657683 kernel: audit: type=1106 audit(1757723103.651:577): pid=5749 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.657761 kernel: audit: type=1104 audit(1757723103.651:578): pid=5749 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:25:03.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.117:22-10.0.0.1:54162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:25:03.657816 systemd-logind[1302]: Removed session 24.