Sep 5 23:59:13.679579 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:59:13.679598 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025
Sep 5 23:59:13.679607 kernel: efi: EFI v2.70 by EDK II
Sep 5 23:59:13.679612 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 5 23:59:13.679617 kernel: random: crng init done
Sep 5 23:59:13.679623 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:59:13.679629 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 5 23:59:13.679636 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 23:59:13.679642 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679647 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679652 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679658 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679663 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679669 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679676 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679682 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679688 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:59:13.679694 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 5 23:59:13.679700 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:59:13.679706 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:59:13.679712 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Sep 5 23:59:13.679717 kernel: Zone ranges:
Sep 5 23:59:13.679723 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:59:13.679730 kernel:   DMA32    empty
Sep 5 23:59:13.679736 kernel:   Normal   empty
Sep 5 23:59:13.679741 kernel: Movable zone start for each node
Sep 5 23:59:13.679747 kernel: Early memory node ranges
Sep 5 23:59:13.679753 kernel:   node   0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 5 23:59:13.679758 kernel:   node   0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 5 23:59:13.679764 kernel:   node   0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 5 23:59:13.679770 kernel:   node   0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 5 23:59:13.679775 kernel:   node   0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 5 23:59:13.679781 kernel:   node   0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 5 23:59:13.679787 kernel:   node   0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 5 23:59:13.679793 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:59:13.679799 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 5 23:59:13.679805 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:59:13.679810 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:59:13.679816 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:59:13.679822 kernel: psci: Trusted OS migration not required
Sep 5 23:59:13.679830 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:59:13.679837 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 23:59:13.679844 kernel: ACPI: SRAT not present
Sep 5 23:59:13.679850 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 5 23:59:13.679856 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 5 23:59:13.679863 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 5 23:59:13.679869 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:59:13.679875 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:59:13.679881 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:59:13.679887 kernel: CPU features: detected: Spectre-v4
Sep 5 23:59:13.679893 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:59:13.679901 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:59:13.679914 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:59:13.679920 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:59:13.679926 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:59:13.679932 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Sep 5 23:59:13.679938 kernel: Policy zone: DMA
Sep 5 23:59:13.679945 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 5 23:59:13.679952 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:59:13.679958 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:59:13.679965 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:59:13.679971 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:59:13.679979 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114952K reserved, 0K cma-reserved)
Sep 5 23:59:13.679985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 23:59:13.679991 kernel: trace event string verifier disabled
Sep 5 23:59:13.679997 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:59:13.680004 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:59:13.680011 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 23:59:13.680017 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:59:13.680023 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:59:13.680029 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:59:13.680035 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 23:59:13.680042 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:59:13.680049 kernel: GICv3: 256 SPIs implemented
Sep 5 23:59:13.680055 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:59:13.680061 kernel: GICv3: Distributor has no Range Selector support
Sep 5 23:59:13.680067 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:59:13.680073 kernel: GICv3: 16 PPIs implemented
Sep 5 23:59:13.680079 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 23:59:13.680085 kernel: ACPI: SRAT not present
Sep 5 23:59:13.680091 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 23:59:13.680097 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 23:59:13.680103 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 23:59:13.680109 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 5 23:59:13.680116 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 5 23:59:13.680123 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:59:13.680129 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 23:59:13.680136 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 23:59:13.680142 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 23:59:13.680148 kernel: arm-pv: using stolen time PV
Sep 5 23:59:13.680154 kernel: Console: colour dummy device 80x25
Sep 5 23:59:13.680161 kernel: ACPI: Core revision 20210730
Sep 5 23:59:13.680167 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 23:59:13.680174 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:59:13.680180 kernel: LSM: Security Framework initializing
Sep 5 23:59:13.680187 kernel: SELinux:  Initializing.
Sep 5 23:59:13.680194 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:59:13.680200 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:59:13.680206 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:59:13.680213 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 23:59:13.680219 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 23:59:13.680225 kernel: Remapping and enabling EFI services.
Sep 5 23:59:13.680231 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:59:13.680237 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:59:13.680244 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 23:59:13.680251 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 5 23:59:13.680257 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:59:13.680263 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 23:59:13.680270 kernel: Detected PIPT I-cache on CPU2
Sep 5 23:59:13.680276 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 5 23:59:13.680283 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 5 23:59:13.680289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:59:13.680296 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 5 23:59:13.680302 kernel: Detected PIPT I-cache on CPU3
Sep 5 23:59:13.680309 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 5 23:59:13.680316 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 5 23:59:13.680322 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:59:13.680328 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 5 23:59:13.680338 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 23:59:13.680346 kernel: SMP: Total of 4 processors activated.
Sep 5 23:59:13.680352 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:59:13.680359 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 23:59:13.680366 kernel: CPU features: detected: Common not Private translations
Sep 5 23:59:13.680372 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:59:13.680379 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 23:59:13.680385 kernel: CPU features: detected: LSE atomic instructions
Sep 5 23:59:13.680393 kernel: CPU features: detected: Privileged Access Never
Sep 5 23:59:13.680400 kernel: CPU features: detected: RAS Extension Support
Sep 5 23:59:13.680407 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 23:59:13.680413 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:59:13.680420 kernel: alternatives: patching kernel code
Sep 5 23:59:13.680427 kernel: devtmpfs: initialized
Sep 5 23:59:13.680434 kernel: KASLR enabled
Sep 5 23:59:13.680441 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:59:13.680447 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 23:59:13.680454 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:59:13.680460 kernel: SMBIOS 3.0.0 present.
Sep 5 23:59:13.680467 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 5 23:59:13.680474 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:59:13.680481 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:59:13.680489 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:59:13.680495 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:59:13.680502 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:59:13.680509 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Sep 5 23:59:13.680515 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:59:13.680522 kernel: cpuidle: using governor menu
Sep 5 23:59:13.680529 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:59:13.680543 kernel: ASID allocator initialised with 32768 entries
Sep 5 23:59:13.680550 kernel: ACPI: bus type PCI registered
Sep 5 23:59:13.680558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:59:13.680565 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:59:13.680571 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:59:13.680578 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:59:13.680585 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:59:13.680591 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:59:13.680598 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 23:59:13.680605 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:59:13.680611 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:59:13.680619 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:59:13.680626 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:59:13.680632 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 5 23:59:13.680639 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 5 23:59:13.680645 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 5 23:59:13.680652 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:59:13.680659 kernel: ACPI: Interpreter enabled
Sep 5 23:59:13.680665 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:59:13.680672 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 23:59:13.680680 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 23:59:13.680687 kernel: printk: console [ttyAMA0] enabled
Sep 5 23:59:13.680693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 23:59:13.680814 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 23:59:13.680876 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 23:59:13.680942 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 23:59:13.681001 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 23:59:13.681089 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 23:59:13.681099 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 23:59:13.681106 kernel: PCI host bridge to bus 0000:00
Sep 5 23:59:13.681172 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 23:59:13.681227 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 23:59:13.681281 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 23:59:13.681332 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 23:59:13.681405 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 23:59:13.681475 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 23:59:13.681555 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 5 23:59:13.681620 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 5 23:59:13.681680 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:59:13.681793 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:59:13.681867 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 5 23:59:13.681947 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 5 23:59:13.682009 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 23:59:13.682062 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 23:59:13.682114 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 23:59:13.682123 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 23:59:13.682130 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 23:59:13.682137 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 23:59:13.682143 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 23:59:13.682151 kernel: iommu: Default domain type: Translated
Sep 5 23:59:13.682158 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:59:13.682165 kernel: vgaarb: loaded
Sep 5 23:59:13.682171 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 5 23:59:13.682178 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 5 23:59:13.682185 kernel: PTP clock support registered
Sep 5 23:59:13.682192 kernel: Registered efivars operations
Sep 5 23:59:13.682198 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:59:13.682205 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:59:13.682213 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:59:13.682220 kernel: pnp: PnP ACPI init
Sep 5 23:59:13.682284 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 23:59:13.682294 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 23:59:13.682300 kernel: NET: Registered PF_INET protocol family
Sep 5 23:59:13.682307 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:59:13.682314 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:59:13.682320 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:59:13.682329 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:59:13.682335 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 5 23:59:13.682342 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:59:13.682349 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:59:13.682356 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:59:13.682362 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:59:13.682369 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:59:13.682375 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 5 23:59:13.682382 kernel: kvm [1]: HYP mode not available
Sep 5 23:59:13.682390 kernel: Initialise system trusted keyrings
Sep 5 23:59:13.682396 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:59:13.682403 kernel: Key type asymmetric registered
Sep 5 23:59:13.682409 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:59:13.682416 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 5 23:59:13.682422 kernel: io scheduler mq-deadline registered
Sep 5 23:59:13.682429 kernel: io scheduler kyber registered
Sep 5 23:59:13.682435 kernel: io scheduler bfq registered
Sep 5 23:59:13.682442 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 23:59:13.682450 kernel: ACPI: button: Power Button [PWRB]
Sep 5 23:59:13.682457 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 23:59:13.682515 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 5 23:59:13.682524 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:59:13.682531 kernel: thunder_xcv, ver 1.0
Sep 5 23:59:13.682547 kernel: thunder_bgx, ver 1.0
Sep 5 23:59:13.682554 kernel: nicpf, ver 1.0
Sep 5 23:59:13.682561 kernel: nicvf, ver 1.0
Sep 5 23:59:13.682636 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:59:13.682696 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:59:13 UTC (1757116753)
Sep 5 23:59:13.682705 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:59:13.682712 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:59:13.682719 kernel: Segment Routing with IPv6
Sep 5 23:59:13.682726 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:59:13.682732 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:59:13.682739 kernel: Key type dns_resolver registered
Sep 5 23:59:13.682745 kernel: registered taskstats version 1
Sep 5 23:59:13.682753 kernel: Loading compiled-in X.509 certificates
Sep 5 23:59:13.682760 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386'
Sep 5 23:59:13.682767 kernel: Key type .fscrypt registered
Sep 5 23:59:13.682773 kernel: Key type fscrypt-provisioning registered
Sep 5 23:59:13.682780 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:59:13.682786 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:59:13.682793 kernel: ima: No architecture policies found
Sep 5 23:59:13.682799 kernel: clk: Disabling unused clocks
Sep 5 23:59:13.682806 kernel: Freeing unused kernel memory: 36416K
Sep 5 23:59:13.682814 kernel: Run /init as init process
Sep 5 23:59:13.682820 kernel:   with arguments:
Sep 5 23:59:13.682827 kernel:     /init
Sep 5 23:59:13.682833 kernel:   with environment:
Sep 5 23:59:13.682839 kernel:     HOME=/
Sep 5 23:59:13.682846 kernel:     TERM=linux
Sep 5 23:59:13.682852 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:59:13.682861 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 5 23:59:13.682870 systemd[1]: Detected virtualization kvm.
Sep 5 23:59:13.682878 systemd[1]: Detected architecture arm64.
Sep 5 23:59:13.682884 systemd[1]: Running in initrd.
Sep 5 23:59:13.682891 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:59:13.682898 systemd[1]: Hostname set to .
Sep 5 23:59:13.682913 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:59:13.682921 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:59:13.682928 systemd[1]: Started systemd-ask-password-console.path.
Sep 5 23:59:13.682936 systemd[1]: Reached target cryptsetup.target.
Sep 5 23:59:13.682943 systemd[1]: Reached target paths.target.
Sep 5 23:59:13.682949 systemd[1]: Reached target slices.target.
Sep 5 23:59:13.682956 systemd[1]: Reached target swap.target.
Sep 5 23:59:13.682963 systemd[1]: Reached target timers.target.
Sep 5 23:59:13.682970 systemd[1]: Listening on iscsid.socket.
Sep 5 23:59:13.682977 systemd[1]: Listening on iscsiuio.socket.
Sep 5 23:59:13.682985 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 5 23:59:13.682992 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 5 23:59:13.683000 systemd[1]: Listening on systemd-journald.socket.
Sep 5 23:59:13.683006 systemd[1]: Listening on systemd-networkd.socket.
Sep 5 23:59:13.683013 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 5 23:59:13.683020 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 5 23:59:13.683027 systemd[1]: Reached target sockets.target.
Sep 5 23:59:13.683034 systemd[1]: Starting kmod-static-nodes.service...
Sep 5 23:59:13.683041 systemd[1]: Finished network-cleanup.service.
Sep 5 23:59:13.683049 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:59:13.683056 systemd[1]: Starting systemd-journald.service...
Sep 5 23:59:13.683063 systemd[1]: Starting systemd-modules-load.service...
Sep 5 23:59:13.683070 systemd[1]: Starting systemd-resolved.service...
Sep 5 23:59:13.683077 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 5 23:59:13.683084 systemd[1]: Finished kmod-static-nodes.service.
Sep 5 23:59:13.683091 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:59:13.683099 kernel: audit: type=1130 audit(1757116753.678:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.683106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 5 23:59:13.683117 systemd-journald[290]: Journal started
Sep 5 23:59:13.683155 systemd-journald[290]: Runtime Journal (/run/log/journal/0c222306b32a48ba84d4ea100d43c649) is 6.0M, max 48.7M, 42.6M free.
Sep 5 23:59:13.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.681886 systemd-modules-load[291]: Inserted module 'overlay'
Sep 5 23:59:13.688659 systemd[1]: Started systemd-journald.service.
Sep 5 23:59:13.688701 kernel: audit: type=1130 audit(1757116753.688:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.688593 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 5 23:59:13.691461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 5 23:59:13.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.693424 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 5 23:59:13.697970 kernel: audit: type=1130 audit(1757116753.691:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.697995 kernel: audit: type=1130 audit(1757116753.692:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:13.701730 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:59:13.705048 systemd-modules-load[291]: Inserted module 'br_netfilter' Sep 5 23:59:13.706006 kernel: Bridge firewalling registered Sep 5 23:59:13.706936 systemd-resolved[292]: Positive Trust Anchors: Sep 5 23:59:13.706950 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:59:13.706978 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 5 23:59:13.711519 systemd-resolved[292]: Defaulting to hostname 'linux'. Sep 5 23:59:13.714715 systemd[1]: Started systemd-resolved.service. Sep 5 23:59:13.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:13.718011 systemd[1]: Finished dracut-cmdline-ask.service. Sep 5 23:59:13.721574 kernel: audit: type=1130 audit(1757116753.714:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:13.721600 kernel: SCSI subsystem initialized Sep 5 23:59:13.721609 kernel: audit: type=1130 audit(1757116753.719:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:13.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 5 23:59:13.719153 systemd[1]: Reached target nss-lookup.target. Sep 5 23:59:13.722951 systemd[1]: Starting dracut-cmdline.service... Sep 5 23:59:13.727102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 23:59:13.727144 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:59:13.727160 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 5 23:59:13.729335 systemd-modules-load[291]: Inserted module 'dm_multipath' Sep 5 23:59:13.730273 systemd[1]: Finished systemd-modules-load.service. Sep 5 23:59:13.733615 kernel: audit: type=1130 audit(1757116753.730:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:13.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:13.733670 dracut-cmdline[308]: dracut-dracut-053 Sep 5 23:59:13.731669 systemd[1]: Starting systemd-sysctl.service... Sep 5 23:59:13.736378 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 5 23:59:13.740954 systemd[1]: Finished systemd-sysctl.service. Sep 5 23:59:13.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 5 23:59:13.744566 kernel: audit: type=1130 audit(1757116753.741:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:13.793588 kernel: Loading iSCSI transport class v2.0-870. Sep 5 23:59:13.805562 kernel: iscsi: registered transport (tcp) Sep 5 23:59:13.820565 kernel: iscsi: registered transport (qla4xxx) Sep 5 23:59:13.820587 kernel: QLogic iSCSI HBA Driver Sep 5 23:59:13.853796 systemd[1]: Finished dracut-cmdline.service. Sep 5 23:59:13.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:13.855231 systemd[1]: Starting dracut-pre-udev.service... Sep 5 23:59:13.858134 kernel: audit: type=1130 audit(1757116753.853:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 5 23:59:13.896564 kernel: raid6: neonx8 gen() 13570 MB/s Sep 5 23:59:13.913562 kernel: raid6: neonx8 xor() 10787 MB/s Sep 5 23:59:13.930563 kernel: raid6: neonx4 gen() 13463 MB/s Sep 5 23:59:13.947559 kernel: raid6: neonx4 xor() 11140 MB/s Sep 5 23:59:13.964557 kernel: raid6: neonx2 gen() 13100 MB/s Sep 5 23:59:13.981565 kernel: raid6: neonx2 xor() 10273 MB/s Sep 5 23:59:13.998553 kernel: raid6: neonx1 gen() 10605 MB/s Sep 5 23:59:14.015555 kernel: raid6: neonx1 xor() 8775 MB/s Sep 5 23:59:14.032567 kernel: raid6: int64x8 gen() 6275 MB/s Sep 5 23:59:14.049558 kernel: raid6: int64x8 xor() 3541 MB/s Sep 5 23:59:14.066567 kernel: raid6: int64x4 gen() 7163 MB/s Sep 5 23:59:14.083563 kernel: raid6: int64x4 xor() 3843 MB/s Sep 5 23:59:14.100573 kernel: raid6: int64x2 gen() 6118 MB/s Sep 5 23:59:14.117559 kernel: raid6: int64x2 xor() 3314 MB/s Sep 5 23:59:14.134559 kernel: raid6: int64x1 gen() 5025 MB/s Sep 5 23:59:14.151933 kernel: raid6: int64x1 xor() 2645 MB/s Sep 5 23:59:14.151949 kernel: raid6: using algorithm neonx8 gen() 13570 MB/s Sep 5 23:59:14.151959 kernel: raid6: .... xor() 10787 MB/s, rmw enabled Sep 5 23:59:14.151976 kernel: raid6: using neon recovery algorithm Sep 5 23:59:14.162716 kernel: xor: measuring software checksum speed Sep 5 23:59:14.162736 kernel: 8regs : 17224 MB/sec Sep 5 23:59:14.163764 kernel: 32regs : 20728 MB/sec Sep 5 23:59:14.163775 kernel: arm64_neon : 27719 MB/sec Sep 5 23:59:14.163784 kernel: xor: using function: arm64_neon (27719 MB/sec) Sep 5 23:59:14.216563 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 5 23:59:14.226953 systemd[1]: Finished dracut-pre-udev.service. Sep 5 23:59:14.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Sep 5 23:59:14.227000 audit: BPF prog-id=7 op=LOAD
Sep 5 23:59:14.227000 audit: BPF prog-id=8 op=LOAD
Sep 5 23:59:14.228705 systemd[1]: Starting systemd-udevd.service...
Sep 5 23:59:14.242562 systemd-udevd[493]: Using default interface naming scheme 'v252'.
Sep 5 23:59:14.247294 systemd[1]: Started systemd-udevd.service.
Sep 5 23:59:14.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:14.250090 systemd[1]: Starting dracut-pre-trigger.service...
Sep 5 23:59:14.264914 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 5 23:59:14.298748 systemd[1]: Finished dracut-pre-trigger.service.
Sep 5 23:59:14.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:14.300153 systemd[1]: Starting systemd-udev-trigger.service...
Sep 5 23:59:14.333421 systemd[1]: Finished systemd-udev-trigger.service.
Sep 5 23:59:14.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:14.362562 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 23:59:14.366785 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 23:59:14.366804 kernel: GPT:9289727 != 19775487
Sep 5 23:59:14.366812 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 23:59:14.366821 kernel: GPT:9289727 != 19775487
Sep 5 23:59:14.366829 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 23:59:14.366837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:59:14.381657 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (553)
Sep 5 23:59:14.384595 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 5 23:59:14.387934 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 5 23:59:14.392978 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 5 23:59:14.396284 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 5 23:59:14.397128 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 5 23:59:14.401252 systemd[1]: Starting disk-uuid.service...
Sep 5 23:59:14.407212 disk-uuid[562]: Primary Header is updated.
Sep 5 23:59:14.407212 disk-uuid[562]: Secondary Entries is updated.
Sep 5 23:59:14.407212 disk-uuid[562]: Secondary Header is updated.
Sep 5 23:59:14.410558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:59:14.413571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:59:14.415557 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:59:15.415932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:59:15.415980 disk-uuid[563]: The operation has completed successfully.
Sep 5 23:59:15.463129 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:59:15.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.463221 systemd[1]: Finished disk-uuid.service.
Sep 5 23:59:15.464707 systemd[1]: Starting verity-setup.service...
Sep 5 23:59:15.488583 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:59:15.518470 systemd[1]: Found device dev-mapper-usr.device.
Sep 5 23:59:15.520606 systemd[1]: Mounting sysusr-usr.mount...
Sep 5 23:59:15.522545 systemd[1]: Finished verity-setup.service.
Sep 5 23:59:15.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.568405 systemd[1]: Mounted sysusr-usr.mount.
Sep 5 23:59:15.569151 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 5 23:59:15.569971 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 5 23:59:15.571871 systemd[1]: Starting ignition-setup.service...
Sep 5 23:59:15.573750 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 5 23:59:15.582887 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:59:15.582927 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:59:15.583549 kernel: BTRFS info (device vda6): has skinny extents
Sep 5 23:59:15.592384 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:59:15.604152 systemd[1]: Finished ignition-setup.service.
Sep 5 23:59:15.606442 systemd[1]: Starting ignition-fetch-offline.service...
Sep 5 23:59:15.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.658178 ignition[665]: Ignition 2.14.0
Sep 5 23:59:15.658203 ignition[665]: Stage: fetch-offline
Sep 5 23:59:15.658227 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 5 23:59:15.658240 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:59:15.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.660000 audit: BPF prog-id=9 op=LOAD
Sep 5 23:59:15.661205 systemd[1]: Starting systemd-networkd.service...
Sep 5 23:59:15.658249 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:59:15.658368 ignition[665]: parsed url from cmdline: ""
Sep 5 23:59:15.658371 ignition[665]: no config URL provided
Sep 5 23:59:15.658375 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:59:15.658382 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:59:15.658399 ignition[665]: op(1): [started] loading QEMU firmware config module
Sep 5 23:59:15.658405 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 23:59:15.669455 ignition[665]: op(1): [finished] loading QEMU firmware config module
Sep 5 23:59:15.682494 systemd-networkd[741]: lo: Link UP
Sep 5 23:59:15.682506 systemd-networkd[741]: lo: Gained carrier
Sep 5 23:59:15.682936 systemd-networkd[741]: Enumeration completed
Sep 5 23:59:15.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.683120 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:59:15.683244 systemd[1]: Started systemd-networkd.service.
Sep 5 23:59:15.684221 systemd-networkd[741]: eth0: Link UP
Sep 5 23:59:15.684225 systemd-networkd[741]: eth0: Gained carrier
Sep 5 23:59:15.684481 systemd[1]: Reached target network.target.
Sep 5 23:59:15.686574 systemd[1]: Starting iscsiuio.service...
Sep 5 23:59:15.693699 systemd[1]: Started iscsiuio.service.
Sep 5 23:59:15.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.695625 systemd[1]: Starting iscsid.service...
Sep 5 23:59:15.698760 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 5 23:59:15.698760 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 5 23:59:15.698760 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 5 23:59:15.698760 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 5 23:59:15.698760 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 5 23:59:15.698760 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 5 23:59:15.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.701447 systemd[1]: Started iscsid.service.
Sep 5 23:59:15.705721 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 23:59:15.707138 systemd[1]: Starting dracut-initqueue.service...
Sep 5 23:59:15.716993 systemd[1]: Finished dracut-initqueue.service.
Sep 5 23:59:15.717833 systemd[1]: Reached target remote-fs-pre.target.
Sep 5 23:59:15.719083 systemd[1]: Reached target remote-cryptsetup.target.
Sep 5 23:59:15.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.720383 systemd[1]: Reached target remote-fs.target.
Sep 5 23:59:15.722424 systemd[1]: Starting dracut-pre-mount.service...
Sep 5 23:59:15.730056 systemd[1]: Finished dracut-pre-mount.service.
Sep 5 23:59:15.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.735289 ignition[665]: parsing config with SHA512: 9b04011d339f486e34d0b3e9a3588cfc5bd6e6d3d9d5694d245dd9bb40b0d73c3a162d9eb6bb5767944d742a031b09a1e733b2ec3372f1c5a07843efdba5c440
Sep 5 23:59:15.741569 unknown[665]: fetched base config from "system"
Sep 5 23:59:15.741585 unknown[665]: fetched user config from "qemu"
Sep 5 23:59:15.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.742262 ignition[665]: fetch-offline: fetch-offline passed
Sep 5 23:59:15.743264 systemd[1]: Finished ignition-fetch-offline.service.
Sep 5 23:59:15.742333 ignition[665]: Ignition finished successfully
Sep 5 23:59:15.744068 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 23:59:15.744749 systemd[1]: Starting ignition-kargs.service...
Sep 5 23:59:15.753570 ignition[761]: Ignition 2.14.0
Sep 5 23:59:15.753579 ignition[761]: Stage: kargs
Sep 5 23:59:15.753674 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:59:15.753684 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:59:15.754525 ignition[761]: kargs: kargs passed
Sep 5 23:59:15.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.755522 systemd[1]: Finished ignition-kargs.service.
Sep 5 23:59:15.754581 ignition[761]: Ignition finished successfully
Sep 5 23:59:15.757447 systemd[1]: Starting ignition-disks.service...
Sep 5 23:59:15.763676 ignition[767]: Ignition 2.14.0
Sep 5 23:59:15.763687 ignition[767]: Stage: disks
Sep 5 23:59:15.763772 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:59:15.763781 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:59:15.766238 systemd[1]: Finished ignition-disks.service.
Sep 5 23:59:15.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.764965 ignition[767]: disks: disks passed
Sep 5 23:59:15.767579 systemd[1]: Reached target initrd-root-device.target.
Sep 5 23:59:15.765008 ignition[767]: Ignition finished successfully
Sep 5 23:59:15.768592 systemd[1]: Reached target local-fs-pre.target.
Sep 5 23:59:15.769515 systemd[1]: Reached target local-fs.target.
Sep 5 23:59:15.770685 systemd[1]: Reached target sysinit.target.
Sep 5 23:59:15.771664 systemd[1]: Reached target basic.target.
Sep 5 23:59:15.773382 systemd[1]: Starting systemd-fsck-root.service...
Sep 5 23:59:15.783805 systemd-fsck[775]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Sep 5 23:59:15.786804 systemd[1]: Finished systemd-fsck-root.service.
Sep 5 23:59:15.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.789398 systemd[1]: Mounting sysroot.mount...
Sep 5 23:59:15.795577 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 5 23:59:15.795921 systemd[1]: Mounted sysroot.mount.
Sep 5 23:59:15.796501 systemd[1]: Reached target initrd-root-fs.target.
Sep 5 23:59:15.798301 systemd[1]: Mounting sysroot-usr.mount...
Sep 5 23:59:15.799069 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 5 23:59:15.799108 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:59:15.799130 systemd[1]: Reached target ignition-diskful.target.
Sep 5 23:59:15.800874 systemd[1]: Mounted sysroot-usr.mount.
Sep 5 23:59:15.803410 systemd[1]: Starting initrd-setup-root.service...
Sep 5 23:59:15.807427 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:59:15.811826 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:59:15.815296 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:59:15.819239 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:59:15.843585 systemd[1]: Finished initrd-setup-root.service.
Sep 5 23:59:15.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.844885 systemd[1]: Starting ignition-mount.service...
Sep 5 23:59:15.846064 systemd[1]: Starting sysroot-boot.service...
Sep 5 23:59:15.850860 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Sep 5 23:59:15.858361 ignition[828]: INFO : Ignition 2.14.0
Sep 5 23:59:15.858361 ignition[828]: INFO : Stage: mount
Sep 5 23:59:15.859627 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:59:15.859627 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:59:15.859627 ignition[828]: INFO : mount: mount passed
Sep 5 23:59:15.859627 ignition[828]: INFO : Ignition finished successfully
Sep 5 23:59:15.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:15.860428 systemd[1]: Finished ignition-mount.service.
Sep 5 23:59:15.864773 systemd[1]: Finished sysroot-boot.service.
Sep 5 23:59:15.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:16.531259 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 5 23:59:16.538160 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (836)
Sep 5 23:59:16.538191 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:59:16.538201 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:59:16.539550 kernel: BTRFS info (device vda6): has skinny extents
Sep 5 23:59:16.541882 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 5 23:59:16.543239 systemd[1]: Starting ignition-files.service...
Sep 5 23:59:16.556520 ignition[856]: INFO : Ignition 2.14.0
Sep 5 23:59:16.556520 ignition[856]: INFO : Stage: files
Sep 5 23:59:16.557700 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:59:16.557700 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:59:16.557700 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:59:16.560407 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:59:16.560407 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:59:16.560407 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:59:16.560407 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:59:16.560407 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:59:16.560403 unknown[856]: wrote ssh authorized keys file for user: core
Sep 5 23:59:16.566257 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 23:59:16.566257 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 23:59:16.566257 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 23:59:16.566257 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 5 23:59:16.624679 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 5 23:59:16.946733 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 23:59:16.946733
ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:59:16.949632 ignition[856]: INFO : files:
createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:59:16.949632 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 5 23:59:17.276293 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 5 23:59:17.452685 systemd-networkd[741]: eth0: Gained IPv6LL
Sep 5 23:59:17.758971 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:59:17.758971 ignition[856]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(10):
op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 23:59:17.762425 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 23:59:17.796922 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 23:59:17.798124 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 23:59:17.798124 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:59:17.798124 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:59:17.798124 ignition[856]: INFO : files: files passed
Sep 5 23:59:17.798124 ignition[856]: INFO : Ignition finished successfully
Sep 5 23:59:17.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.800183 systemd[1]: Finished ignition-files.service.
Sep 5 23:59:17.802982 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 5 23:59:17.804223 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 5 23:59:17.809834 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 5 23:59:17.804922 systemd[1]: Starting ignition-quench.service...
Sep 5 23:59:17.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.812657 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:59:17.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.810140 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 5 23:59:17.812159 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:59:17.812231 systemd[1]: Finished ignition-quench.service.
Sep 5 23:59:17.813236 systemd[1]: Reached target ignition-complete.target.
Sep 5 23:59:17.815691 systemd[1]: Starting initrd-parse-etc.service...
Sep 5 23:59:17.827955 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 23:59:17.828046 systemd[1]: Finished initrd-parse-etc.service.
Sep 5 23:59:17.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Sep 5 23:59:17.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.829506 systemd[1]: Reached target initrd-fs.target.
Sep 5 23:59:17.830524 systemd[1]: Reached target initrd.target.
Sep 5 23:59:17.831705 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 5 23:59:17.832395 systemd[1]: Starting dracut-pre-pivot.service...
Sep 5 23:59:17.842292 systemd[1]: Finished dracut-pre-pivot.service.
Sep 5 23:59:17.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.843753 systemd[1]: Starting initrd-cleanup.service...
Sep 5 23:59:17.851374 systemd[1]: Stopped target nss-lookup.target.
Sep 5 23:59:17.852125 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 5 23:59:17.853463 systemd[1]: Stopped target timers.target.
Sep 5 23:59:17.854740 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 23:59:17.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.854837 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 5 23:59:17.855987 systemd[1]: Stopped target initrd.target.
Sep 5 23:59:17.857165 systemd[1]: Stopped target basic.target.
Sep 5 23:59:17.858194 systemd[1]: Stopped target ignition-complete.target.
Sep 5 23:59:17.859269 systemd[1]: Stopped target ignition-diskful.target.
Sep 5 23:59:17.860419 systemd[1]: Stopped target initrd-root-device.target.
Sep 5 23:59:17.861659 systemd[1]: Stopped target remote-fs.target.
Sep 5 23:59:17.862806 systemd[1]: Stopped target remote-fs-pre.target.
Sep 5 23:59:17.864028 systemd[1]: Stopped target sysinit.target.
Sep 5 23:59:17.865121 systemd[1]: Stopped target local-fs.target.
Sep 5 23:59:17.866304 systemd[1]: Stopped target local-fs-pre.target.
Sep 5 23:59:17.867445 systemd[1]: Stopped target swap.target.
Sep 5 23:59:17.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.868520 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 23:59:17.868627 systemd[1]: Stopped dracut-pre-mount.service.
Sep 5 23:59:17.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.869837 systemd[1]: Stopped target cryptsetup.target.
Sep 5 23:59:17.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.870816 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 23:59:17.870913 systemd[1]: Stopped dracut-initqueue.service.
Sep 5 23:59:17.872190 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 23:59:17.872278 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 5 23:59:17.873415 systemd[1]: Stopped target paths.target.
Sep 5 23:59:17.874463 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 23:59:17.877583 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 5 23:59:17.878364 systemd[1]: Stopped target slices.target.
Sep 5 23:59:17.879365 systemd[1]: Stopped target sockets.target.
Sep 5 23:59:17.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.880821 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 23:59:17.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.880895 systemd[1]: Closed iscsid.socket.
Sep 5 23:59:17.881975 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 23:59:17.882065 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 5 23:59:17.883228 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 23:59:17.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.883311 systemd[1]: Stopped ignition-files.service.
Sep 5 23:59:17.885102 systemd[1]: Stopping ignition-mount.service...
Sep 5 23:59:17.886287 systemd[1]: Stopping iscsiuio.service...
Sep 5 23:59:17.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Sep 5 23:59:17.893694 ignition[897]: INFO : Ignition 2.14.0
Sep 5 23:59:17.893694 ignition[897]: INFO : Stage: umount
Sep 5 23:59:17.893694 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:59:17.893694 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:59:17.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.887998 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 23:59:17.899644 ignition[897]: INFO : umount: umount passed
Sep 5 23:59:17.899644 ignition[897]: INFO : Ignition finished successfully
Sep 5 23:59:17.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.888118 systemd[1]: Stopped kmod-static-nodes.service.
Sep 5 23:59:17.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.890082 systemd[1]: Stopping sysroot-boot.service...
Sep 5 23:59:17.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.890663 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 23:59:17.890765 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 5 23:59:17.891941 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 23:59:17.892026 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 5 23:59:17.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.894596 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 5 23:59:17.894688 systemd[1]: Stopped iscsiuio.service.
Sep 5 23:59:17.896055 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 23:59:17.896124 systemd[1]: Stopped ignition-mount.service.
Sep 5 23:59:17.897552 systemd[1]: Stopped target network.target.
Sep 5 23:59:17.898852 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 23:59:17.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.898882 systemd[1]: Closed iscsiuio.socket.
Sep 5 23:59:17.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.900077 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 23:59:17.900112 systemd[1]: Stopped ignition-disks.service.
Sep 5 23:59:17.919000 audit: BPF prog-id=6 op=UNLOAD
Sep 5 23:59:17.901502 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 23:59:17.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.901548 systemd[1]: Stopped ignition-kargs.service.
Sep 5 23:59:17.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.903123 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 23:59:17.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.903159 systemd[1]: Stopped ignition-setup.service.
Sep 5 23:59:17.904553 systemd[1]: Stopping systemd-networkd.service...
Sep 5 23:59:17.905462 systemd[1]: Stopping systemd-resolved.service...
Sep 5 23:59:17.907815 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 23:59:17.908296 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 23:59:17.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.908371 systemd[1]: Finished initrd-cleanup.service.
Sep 5 23:59:17.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.913622 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 23:59:17.913718 systemd[1]: Stopped systemd-resolved.service.
Sep 5 23:59:17.914714 systemd-networkd[741]: eth0: DHCPv6 lease lost
Sep 5 23:59:17.935000 audit: BPF prog-id=9 op=UNLOAD
Sep 5 23:59:17.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.916122 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 23:59:17.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.916207 systemd[1]: Stopped systemd-networkd.service.
Sep 5 23:59:17.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.917275 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 23:59:17.917302 systemd[1]: Closed systemd-networkd.socket.
Sep 5 23:59:17.918953 systemd[1]: Stopping network-cleanup.service...
Sep 5 23:59:17.919719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 23:59:17.919774 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 5 23:59:17.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.921072 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 23:59:17.921110 systemd[1]: Stopped systemd-sysctl.service.
Sep 5 23:59:17.923093 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 23:59:17.923132 systemd[1]: Stopped systemd-modules-load.service.
Sep 5 23:59:17.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.924050 systemd[1]: Stopping systemd-udevd.service...
Sep 5 23:59:17.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:17.928330 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 23:59:17.930100 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 23:59:17.930233 systemd[1]: Stopped systemd-udevd.service.
Sep 5 23:59:17.931686 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 23:59:17.931771 systemd[1]: Stopped network-cleanup.service.
Sep 5 23:59:17.932736 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 23:59:17.932773 systemd[1]: Closed systemd-udevd-control.socket.
Sep 5 23:59:17.933784 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 23:59:17.933815 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 5 23:59:17.935207 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 23:59:17.935248 systemd[1]: Stopped dracut-pre-udev.service.
Sep 5 23:59:17.936478 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 23:59:17.936518 systemd[1]: Stopped dracut-cmdline.service.
Sep 5 23:59:17.939066 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:59:17.939104 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 5 23:59:17.941097 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 5 23:59:17.963000 audit: BPF prog-id=8 op=UNLOAD
Sep 5 23:59:17.963000 audit: BPF prog-id=7 op=UNLOAD
Sep 5 23:59:17.942545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:59:17.942636 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 5 23:59:17.965000 audit: BPF prog-id=5 op=UNLOAD
Sep 5 23:59:17.965000 audit: BPF prog-id=4 op=UNLOAD
Sep 5 23:59:17.965000 audit: BPF prog-id=3 op=UNLOAD
Sep 5 23:59:17.944078 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 23:59:17.944164 systemd[1]: Stopped sysroot-boot.service.
Sep 5 23:59:17.945054 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 23:59:17.945089 systemd[1]: Stopped initrd-setup-root.service.
Sep 5 23:59:17.949716 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 23:59:17.949796 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 5 23:59:17.950614 systemd[1]: Reached target initrd-switch-root.target.
Sep 5 23:59:17.952372 systemd[1]: Starting initrd-switch-root.service...
Sep 5 23:59:17.974621 systemd-journald[290]: Received SIGTERM from PID 1 (n/a).
Sep 5 23:59:17.974653 iscsid[746]: iscsid shutting down.
Sep 5 23:59:17.962173 systemd[1]: Switching root.
Sep 5 23:59:17.975794 systemd-journald[290]: Journal stopped
Sep 5 23:59:19.955676 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 5 23:59:19.955742 kernel: SELinux: Class anon_inode not defined in policy.
Sep 5 23:59:19.955754 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 5 23:59:19.955765 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 23:59:19.955774 kernel: SELinux: policy capability open_perms=1
Sep 5 23:59:19.955787 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 23:59:19.955796 kernel: SELinux: policy capability always_check_network=0
Sep 5 23:59:19.955806 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 23:59:19.955817 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 23:59:19.955826 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 23:59:19.955836 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 23:59:19.955845 kernel: kauditd_printk_skb: 70 callbacks suppressed
Sep 5 23:59:19.955860 kernel: audit: type=1403 audit(1757116758.050:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 23:59:19.958590 systemd[1]: Successfully loaded SELinux policy in 32ms.
Sep 5 23:59:19.958640 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.614ms.
Sep 5 23:59:19.958654 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 5 23:59:19.958668 systemd[1]: Detected virtualization kvm.
Sep 5 23:59:19.958679 systemd[1]: Detected architecture arm64.
Sep 5 23:59:19.958689 systemd[1]: Detected first boot.
Sep 5 23:59:19.958700 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:59:19.958710 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 5 23:59:19.958722 kernel: audit: type=1400 audit(1757116758.183:82): avc: denied { associate } for pid=948 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 5 23:59:19.958733 kernel: audit: type=1300 audit(1757116758.183:82): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001056ac a1=4000028b40 a2=4000026a40 a3=32 items=0 ppid=931 pid=948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 5 23:59:19.958745 kernel: audit: type=1327 audit(1757116758.183:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 5 23:59:19.958756 kernel: audit: type=1400 audit(1757116758.184:83): avc: denied { associate } for pid=948 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 5 23:59:19.958766 kernel: audit: type=1300 audit(1757116758.184:83): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000105789 a2=1ed a3=0 items=2 ppid=931 pid=948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 5 23:59:19.958776 kernel: audit: type=1307 audit(1757116758.184:83): cwd="/"
Sep 5 23:59:19.958786 kernel: audit: type=1302 audit(1757116758.184:83): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 5 23:59:19.958797 kernel: audit: type=1302 audit(1757116758.184:83): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 5 23:59:19.958810 kernel: audit: type=1327 audit(1757116758.184:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 5 23:59:19.958822 systemd[1]: Populated /etc with preset unit settings.
Sep 5 23:59:19.958834 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 5 23:59:19.958847 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 5 23:59:19.958859 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:59:19.958870 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 23:59:19.958892 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 5 23:59:19.958903 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 5 23:59:19.958913 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 5 23:59:19.958924 systemd[1]: Created slice system-getty.slice.
Sep 5 23:59:19.958937 systemd[1]: Created slice system-modprobe.slice.
Sep 5 23:59:19.958948 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 5 23:59:19.958959 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 5 23:59:19.958971 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 5 23:59:19.958981 systemd[1]: Created slice user.slice.
Sep 5 23:59:19.958992 systemd[1]: Started systemd-ask-password-console.path.
Sep 5 23:59:19.959003 systemd[1]: Started systemd-ask-password-wall.path.
Sep 5 23:59:19.959013 systemd[1]: Set up automount boot.automount.
Sep 5 23:59:19.959023 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 5 23:59:19.959033 systemd[1]: Reached target integritysetup.target.
Sep 5 23:59:19.959044 systemd[1]: Reached target remote-cryptsetup.target.
Sep 5 23:59:19.959054 systemd[1]: Reached target remote-fs.target.
Sep 5 23:59:19.959066 systemd[1]: Reached target slices.target.
Sep 5 23:59:19.959076 systemd[1]: Reached target swap.target.
Sep 5 23:59:19.959086 systemd[1]: Reached target torcx.target.
Sep 5 23:59:19.959097 systemd[1]: Reached target veritysetup.target.
Sep 5 23:59:19.959107 systemd[1]: Listening on systemd-coredump.socket.
Sep 5 23:59:19.959118 systemd[1]: Listening on systemd-initctl.socket.
Sep 5 23:59:19.959128 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 5 23:59:19.959138 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 5 23:59:19.959149 systemd[1]: Listening on systemd-journald.socket.
Sep 5 23:59:19.959160 systemd[1]: Listening on systemd-networkd.socket.
Sep 5 23:59:19.959172 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 5 23:59:19.959182 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 5 23:59:19.959192 systemd[1]: Listening on systemd-userdbd.socket.
Sep 5 23:59:19.959203 systemd[1]: Mounting dev-hugepages.mount...
Sep 5 23:59:19.959213 systemd[1]: Mounting dev-mqueue.mount...
Sep 5 23:59:19.959223 systemd[1]: Mounting media.mount...
Sep 5 23:59:19.959233 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 5 23:59:19.959243 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 5 23:59:19.959254 systemd[1]: Mounting tmp.mount...
Sep 5 23:59:19.959265 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 5 23:59:19.959276 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 5 23:59:19.959286 systemd[1]: Starting kmod-static-nodes.service...
Sep 5 23:59:19.959296 systemd[1]: Starting modprobe@configfs.service...
Sep 5 23:59:19.959307 systemd[1]: Starting modprobe@dm_mod.service...
Sep 5 23:59:19.959317 systemd[1]: Starting modprobe@drm.service...
Sep 5 23:59:19.959328 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 5 23:59:19.959339 systemd[1]: Starting modprobe@fuse.service...
Sep 5 23:59:19.959350 systemd[1]: Starting modprobe@loop.service...
Sep 5 23:59:19.959362 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 23:59:19.959373 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 5 23:59:19.959384 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 5 23:59:19.959394 systemd[1]: Starting systemd-journald.service...
Sep 5 23:59:19.959404 systemd[1]: Starting systemd-modules-load.service...
Sep 5 23:59:19.959414 kernel: loop: module loaded
Sep 5 23:59:19.959424 kernel: fuse: init (API version 7.34)
Sep 5 23:59:19.959434 systemd[1]: Starting systemd-network-generator.service...
Sep 5 23:59:19.959444 systemd[1]: Starting systemd-remount-fs.service...
Sep 5 23:59:19.959456 systemd[1]: Starting systemd-udev-trigger.service...
Sep 5 23:59:19.959467 systemd[1]: Mounted dev-hugepages.mount.
Sep 5 23:59:19.959476 systemd[1]: Mounted dev-mqueue.mount.
Sep 5 23:59:19.959487 systemd[1]: Mounted media.mount.
Sep 5 23:59:19.959496 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 5 23:59:19.959506 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 5 23:59:19.959517 systemd[1]: Mounted tmp.mount.
Sep 5 23:59:19.959530 systemd-journald[1033]: Journal started
Sep 5 23:59:19.959595 systemd-journald[1033]: Runtime Journal (/run/log/journal/0c222306b32a48ba84d4ea100d43c649) is 6.0M, max 48.7M, 42.6M free.
Sep 5 23:59:19.889000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 5 23:59:19.889000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 5 23:59:19.948000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 5 23:59:19.948000 audit[1033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff4b7f620 a2=4000 a3=1 items=0 ppid=1 pid=1033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 5 23:59:19.948000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 5 23:59:19.960893 systemd[1]: Finished kmod-static-nodes.service.
Sep 5 23:59:19.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.962558 systemd[1]: Started systemd-journald.service.
Sep 5 23:59:19.963040 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 23:59:19.965212 systemd[1]: Finished modprobe@configfs.service.
Sep 5 23:59:19.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.966165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:59:19.966388 systemd[1]: Finished modprobe@dm_mod.service.
Sep 5 23:59:19.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.967282 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:59:19.967486 systemd[1]: Finished modprobe@drm.service.
Sep 5 23:59:19.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.968365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:59:19.968569 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 5 23:59:19.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.969423 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 23:59:19.969721 systemd[1]: Finished modprobe@fuse.service.
Sep 5 23:59:19.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.970570 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:59:19.970728 systemd[1]: Finished modprobe@loop.service.
Sep 5 23:59:19.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.973264 systemd[1]: Finished systemd-modules-load.service.
Sep 5 23:59:19.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.974846 systemd[1]: Finished systemd-network-generator.service.
Sep 5 23:59:19.976022 systemd[1]: Finished systemd-remount-fs.service.
Sep 5 23:59:19.977219 systemd[1]: Reached target network-pre.target.
Sep 5 23:59:19.979188 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 5 23:59:19.981209 systemd[1]: Mounting sys-kernel-config.mount...
Sep 5 23:59:19.982002 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 23:59:19.983492 systemd[1]: Starting systemd-hwdb-update.service...
Sep 5 23:59:19.985504 systemd[1]: Starting systemd-journal-flush.service...
Sep 5 23:59:19.986313 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:59:19.987560 systemd[1]: Starting systemd-random-seed.service...
Sep 5 23:59:19.988452 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 5 23:59:19.989747 systemd[1]: Starting systemd-sysctl.service...
Sep 5 23:59:19.992432 systemd-journald[1033]: Time spent on flushing to /var/log/journal/0c222306b32a48ba84d4ea100d43c649 is 12.252ms for 928 entries.
Sep 5 23:59:19.992432 systemd-journald[1033]: System Journal (/var/log/journal/0c222306b32a48ba84d4ea100d43c649) is 8.0M, max 195.6M, 187.6M free.
Sep 5 23:59:20.032220 systemd-journald[1033]: Received client request to flush runtime journal.
Sep 5 23:59:19.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:19.992065 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 5 23:59:19.994943 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 5 23:59:19.995777 systemd[1]: Mounted sys-kernel-config.mount.
Sep 5 23:59:20.033480 udevadm[1078]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 5 23:59:19.997644 systemd[1]: Starting systemd-sysusers.service...
Sep 5 23:59:20.005364 systemd[1]: Finished systemd-udev-trigger.service.
Sep 5 23:59:20.007408 systemd[1]: Starting systemd-udev-settle.service...
Sep 5 23:59:20.015374 systemd[1]: Finished systemd-random-seed.service.
Sep 5 23:59:20.016476 systemd[1]: Finished systemd-sysctl.service.
Sep 5 23:59:20.019170 systemd[1]: Reached target first-boot-complete.target.
Sep 5 23:59:20.023003 systemd[1]: Finished systemd-sysusers.service.
Sep 5 23:59:20.024825 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 5 23:59:20.033305 systemd[1]: Finished systemd-journal-flush.service.
Sep 5 23:59:20.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.048385 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 5 23:59:20.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.370256 systemd[1]: Finished systemd-hwdb-update.service.
Sep 5 23:59:20.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.372233 systemd[1]: Starting systemd-udevd.service...
Sep 5 23:59:20.388426 systemd-udevd[1089]: Using default interface naming scheme 'v252'.
Sep 5 23:59:20.399490 systemd[1]: Started systemd-udevd.service.
Sep 5 23:59:20.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.401861 systemd[1]: Starting systemd-networkd.service...
Sep 5 23:59:20.420658 systemd[1]: Starting systemd-userdbd.service...
Sep 5 23:59:20.423403 systemd[1]: Found device dev-ttyAMA0.device.
Sep 5 23:59:20.457571 systemd[1]: Started systemd-userdbd.service.
Sep 5 23:59:20.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.470959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 5 23:59:20.507308 systemd-networkd[1097]: lo: Link UP
Sep 5 23:59:20.507319 systemd-networkd[1097]: lo: Gained carrier
Sep 5 23:59:20.507698 systemd-networkd[1097]: Enumeration completed
Sep 5 23:59:20.507838 systemd[1]: Started systemd-networkd.service.
Sep 5 23:59:20.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.508887 systemd[1]: Finished systemd-udev-settle.service.
Sep 5 23:59:20.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.510819 systemd[1]: Starting lvm2-activation-early.service...
Sep 5 23:59:20.511461 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:59:20.513337 systemd-networkd[1097]: eth0: Link UP
Sep 5 23:59:20.513355 systemd-networkd[1097]: eth0: Gained carrier
Sep 5 23:59:20.520104 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 23:59:20.536778 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 23:59:20.549570 systemd[1]: Finished lvm2-activation-early.service.
Sep 5 23:59:20.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.550365 systemd[1]: Reached target cryptsetup.target.
Sep 5 23:59:20.552265 systemd[1]: Starting lvm2-activation.service...
Sep 5 23:59:20.555983 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 23:59:20.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.590518 systemd[1]: Finished lvm2-activation.service.
Sep 5 23:59:20.591300 systemd[1]: Reached target local-fs-pre.target.
Sep 5 23:59:20.592046 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 23:59:20.592072 systemd[1]: Reached target local-fs.target.
Sep 5 23:59:20.592717 systemd[1]: Reached target machines.target.
Sep 5 23:59:20.594493 systemd[1]: Starting ldconfig.service...
Sep 5 23:59:20.595760 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 5 23:59:20.595815 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 5 23:59:20.597096 systemd[1]: Starting systemd-boot-update.service...
Sep 5 23:59:20.599013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 5 23:59:20.601075 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 5 23:59:20.604020 systemd[1]: Starting systemd-sysext.service...
Sep 5 23:59:20.605109 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl)
Sep 5 23:59:20.606324 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 5 23:59:20.616270 systemd[1]: Unmounting usr-share-oem.mount...
Sep 5 23:59:20.622331 systemd[1]: usr-share-oem.mount: Deactivated successfully.
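The DHCPv4 lease recorded by systemd-networkd above (address 10.0.0.34/16, gateway 10.0.0.1) can be sanity-checked with Python's standard `ipaddress` module. This is an illustrative sketch by the editor using the values from the log, not part of the boot output:

```python
import ipaddress

# Address/prefix from the systemd-networkd DHCPv4 log entry above.
iface = ipaddress.ip_interface("10.0.0.34/16")
gw = ipaddress.ip_address("10.0.0.1")

# The gateway must lie inside the leased network for routing to work.
assert gw in iface.network
print(iface.network)  # 10.0.0.0/16
```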
Sep 5 23:59:20.622751 systemd[1]: Unmounted usr-share-oem.mount.
Sep 5 23:59:20.627570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 5 23:59:20.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.678571 kernel: loop0: detected capacity change from 0 to 203944
Sep 5 23:59:20.683767 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 5 23:59:20.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.696557 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 23:59:20.699857 systemd-fsck[1141]: fsck.fat 4.2 (2021-01-31)
Sep 5 23:59:20.699857 systemd-fsck[1141]: /dev/vda1: 236 files, 117310/258078 clusters
Sep 5 23:59:20.701591 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 5 23:59:20.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.709585 kernel: loop1: detected capacity change from 0 to 203944
Sep 5 23:59:20.716487 (sd-sysext)[1148]: Using extensions 'kubernetes'.
Sep 5 23:59:20.716811 (sd-sysext)[1148]: Merged extensions into '/usr'.
Sep 5 23:59:20.733255 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 5 23:59:20.734636 systemd[1]: Starting modprobe@dm_mod.service...
Sep 5 23:59:20.736341 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 5 23:59:20.738231 systemd[1]: Starting modprobe@loop.service...
Sep 5 23:59:20.739033 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 5 23:59:20.739167 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 5 23:59:20.739904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:59:20.740055 systemd[1]: Finished modprobe@dm_mod.service.
Sep 5 23:59:20.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.741439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:59:20.741584 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 5 23:59:20.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.742700 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:59:20.742850 systemd[1]: Finished modprobe@loop.service.
Sep 5 23:59:20.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.743965 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:59:20.744064 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 5 23:59:20.805501 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 23:59:20.809628 systemd[1]: Finished ldconfig.service.
Sep 5 23:59:20.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.952190 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 23:59:20.953999 systemd[1]: Mounting boot.mount...
Sep 5 23:59:20.955769 systemd[1]: Mounting usr-share-oem.mount...
Sep 5 23:59:20.962700 systemd[1]: Mounted boot.mount.
Sep 5 23:59:20.963980 systemd[1]: Mounted usr-share-oem.mount.
Sep 5 23:59:20.967756 systemd[1]: Finished systemd-sysext.service.
Sep 5 23:59:20.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.970040 systemd[1]: Starting ensure-sysext.service...
Sep 5 23:59:20.971907 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 5 23:59:20.973084 systemd[1]: Finished systemd-boot-update.service.
Sep 5 23:59:20.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:20.975965 systemd[1]: Reloading.
Sep 5 23:59:20.986632 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 5 23:59:20.987339 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 23:59:20.988595 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 23:59:21.010825 /usr/lib/systemd/system-generators/torcx-generator[1186]: time="2025-09-05T23:59:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 5 23:59:21.013388 /usr/lib/systemd/system-generators/torcx-generator[1186]: time="2025-09-05T23:59:21Z" level=info msg="torcx already run"
Sep 5 23:59:21.080019 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 5 23:59:21.080041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 5 23:59:21.096272 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:59:21.145674 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 5 23:59:21.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.148718 systemd[1]: Starting audit-rules.service...
Sep 5 23:59:21.150504 systemd[1]: Starting clean-ca-certificates.service...
Sep 5 23:59:21.152649 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 5 23:59:21.154992 systemd[1]: Starting systemd-resolved.service...
Sep 5 23:59:21.157063 systemd[1]: Starting systemd-timesyncd.service...
Sep 5 23:59:21.159310 systemd[1]: Starting systemd-update-utmp.service...
Sep 5 23:59:21.160709 systemd[1]: Finished clean-ca-certificates.service.
Sep 5 23:59:21.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.163651 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 23:59:21.163000 audit[1244]: SYSTEM_BOOT pid=1244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.167531 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.169136 systemd[1]: Starting modprobe@dm_mod.service...
Sep 5 23:59:21.171244 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 5 23:59:21.173516 systemd[1]: Starting modprobe@loop.service...
Sep 5 23:59:21.174447 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.174626 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 5 23:59:21.174757 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 23:59:21.175983 systemd[1]: Finished systemd-update-utmp.service.
Sep 5 23:59:21.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.177338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:59:21.177478 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 5 23:59:21.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.179009 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:59:21.179421 systemd[1]: Finished modprobe@loop.service.
Sep 5 23:59:21.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.182451 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:59:21.184375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:59:21.184644 systemd[1]: Finished modprobe@dm_mod.service.
Sep 5 23:59:21.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.186996 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.188472 systemd[1]: Starting modprobe@dm_mod.service...
Sep 5 23:59:21.190768 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 5 23:59:21.192929 systemd[1]: Starting modprobe@loop.service...
Sep 5 23:59:21.195979 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.196176 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 5 23:59:21.196301 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 23:59:21.200079 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 5 23:59:21.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.201417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:59:21.201731 systemd[1]: Finished modprobe@dm_mod.service.
Sep 5 23:59:21.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 5 23:59:21.204000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 5 23:59:21.204000 audit[1268]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd34d9740 a2=420 a3=0 items=0 ppid=1232 pid=1268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 5 23:59:21.204000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 5 23:59:21.204945 augenrules[1268]: No rules
Sep 5 23:59:21.205257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:59:21.205399 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 5 23:59:21.206836 systemd[1]: Finished audit-rules.service.
Sep 5 23:59:21.207966 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:59:21.208278 systemd[1]: Finished modprobe@loop.service.
Sep 5 23:59:21.209793 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:59:21.209915 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.211775 systemd[1]: Starting systemd-update-done.service...
Sep 5 23:59:21.215979 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
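The audit PROCTITLE record above carries the auditing process's command line as hex-encoded bytes with NUL-separated argv entries. Decoding the value taken verbatim from the log (an editor's sketch, not part of the boot output) recovers the `auditctl` invocation that loaded the rules:

```python
# Hex string copied from the PROCTITLE audit record above.
hex_title = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

# argv entries are separated by NUL bytes in the kernel's proctitle encoding.
argv = [a.decode() for a in bytes.fromhex(hex_title).split(b"\x00")]
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```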
Sep 5 23:59:21.217529 systemd[1]: Starting modprobe@dm_mod.service...
Sep 5 23:59:21.219956 systemd[1]: Starting modprobe@drm.service...
Sep 5 23:59:21.222213 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 5 23:59:21.224681 systemd[1]: Starting modprobe@loop.service...
Sep 5 23:59:21.225789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.225975 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 5 23:59:21.226041 systemd-timesyncd[1241]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 23:59:21.226093 systemd-timesyncd[1241]: Initial clock synchronization to Fri 2025-09-05 23:59:21.155466 UTC.
Sep 5 23:59:21.228720 systemd-resolved[1239]: Positive Trust Anchors:
Sep 5 23:59:21.228728 systemd-resolved[1239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:59:21.228758 systemd-resolved[1239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 5 23:59:21.229326 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 5 23:59:21.230414 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 23:59:21.231960 systemd[1]: Started systemd-timesyncd.service.
Sep 5 23:59:21.233765 systemd[1]: Finished systemd-update-done.service.
Sep 5 23:59:21.235097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:59:21.235262 systemd[1]: Finished modprobe@dm_mod.service.
Sep 5 23:59:21.236488 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:59:21.236659 systemd[1]: Finished modprobe@drm.service.
Sep 5 23:59:21.237782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:59:21.237954 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 5 23:59:21.239134 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:59:21.239302 systemd[1]: Finished modprobe@loop.service.
Sep 5 23:59:21.240762 systemd[1]: Reached target time-set.target.
Sep 5 23:59:21.241493 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:59:21.241525 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.243058 systemd[1]: Finished ensure-sysext.service.
Sep 5 23:59:21.245858 systemd-resolved[1239]: Defaulting to hostname 'linux'.
Sep 5 23:59:21.247641 systemd[1]: Started systemd-resolved.service.
Sep 5 23:59:21.248363 systemd[1]: Reached target network.target.
Sep 5 23:59:21.249187 systemd[1]: Reached target nss-lookup.target.
Sep 5 23:59:21.249833 systemd[1]: Reached target sysinit.target.
Sep 5 23:59:21.250499 systemd[1]: Started motdgen.path.
Sep 5 23:59:21.251136 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 5 23:59:21.252206 systemd[1]: Started logrotate.timer.
Sep 5 23:59:21.253098 systemd[1]: Started mdadm.timer.
Sep 5 23:59:21.253652 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 5 23:59:21.254303 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 23:59:21.254333 systemd[1]: Reached target paths.target.
Sep 5 23:59:21.254956 systemd[1]: Reached target timers.target.
Sep 5 23:59:21.256168 systemd[1]: Listening on dbus.socket.
Sep 5 23:59:21.258121 systemd[1]: Starting docker.socket...
Sep 5 23:59:21.260128 systemd[1]: Listening on sshd.socket.
Sep 5 23:59:21.261006 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 5 23:59:21.261376 systemd[1]: Listening on docker.socket.
Sep 5 23:59:21.262098 systemd[1]: Reached target sockets.target.
Sep 5 23:59:21.262731 systemd[1]: Reached target basic.target.
Sep 5 23:59:21.263493 systemd[1]: System is tainted: cgroupsv1
Sep 5 23:59:21.263585 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.263618 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 5 23:59:21.264842 systemd[1]: Starting containerd.service...
Sep 5 23:59:21.266769 systemd[1]: Starting dbus.service...
Sep 5 23:59:21.268618 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 5 23:59:21.270637 systemd[1]: Starting extend-filesystems.service...
Sep 5 23:59:21.274280 jq[1294]: false
Sep 5 23:59:21.271428 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 5 23:59:21.272908 systemd[1]: Starting motdgen.service...
Sep 5 23:59:21.274775 systemd[1]: Starting prepare-helm.service...
Sep 5 23:59:21.276767 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 5 23:59:21.278767 systemd[1]: Starting sshd-keygen.service...
Sep 5 23:59:21.281855 systemd[1]: Starting systemd-logind.service...
Sep 5 23:59:21.282527 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 5 23:59:21.282663 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 23:59:21.284013 systemd[1]: Starting update-engine.service...
Sep 5 23:59:21.285884 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 5 23:59:21.288160 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 23:59:21.291746 jq[1313]: true
Sep 5 23:59:21.288412 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 5 23:59:21.289490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 23:59:21.289795 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found loop1
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda1
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda2
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda3
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found usr
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda4
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda6
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda7
Sep 5 23:59:21.301752 extend-filesystems[1295]: Found vda9
Sep 5 23:59:21.301752 extend-filesystems[1295]: Checking size of /dev/vda9
Sep 5 23:59:21.334657 tar[1317]: linux-arm64/helm
Sep 5 23:59:21.315722 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 23:59:21.330044 dbus-daemon[1293]: [system] SELinux support is enabled
Sep 5 23:59:21.335695 jq[1319]: true
Sep 5 23:59:21.327347 systemd[1]: Finished motdgen.service.
Sep 5 23:59:21.330328 systemd[1]: Started dbus.service.
Sep 5 23:59:21.335522 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 23:59:21.335561 systemd[1]: Reached target system-config.target.
Sep 5 23:59:21.336458 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 23:59:21.336486 systemd[1]: Reached target user-config.target.
Sep 5 23:59:21.338403 extend-filesystems[1295]: Resized partition /dev/vda9
Sep 5 23:59:21.344890 extend-filesystems[1344]: resize2fs 1.46.5 (30-Dec-2021)
Sep 5 23:59:21.347564 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 5 23:59:21.371317 update_engine[1312]: I0905 23:59:21.371038 1312 main.cc:92] Flatcar Update Engine starting
Sep 5 23:59:21.383774 update_engine[1312]: I0905 23:59:21.374465 1312 update_check_scheduler.cc:74] Next update check in 5m51s
Sep 5 23:59:21.374191 systemd[1]: Started update-engine.service.
Sep 5 23:59:21.377060 systemd[1]: Started locksmithd.service.
Sep 5 23:59:21.382369 systemd-logind[1310]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 5 23:59:21.382656 systemd-logind[1310]: New seat seat0.
Sep 5 23:59:21.391120 systemd[1]: Started systemd-logind.service.
Sep 5 23:59:21.395565 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 5 23:59:21.415555 env[1322]: time="2025-09-05T23:59:21.412996160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 5 23:59:21.415864 extend-filesystems[1344]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 23:59:21.415864 extend-filesystems[1344]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 5 23:59:21.415864 extend-filesystems[1344]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 5 23:59:21.422380 extend-filesystems[1295]: Resized filesystem in /dev/vda9
Sep 5 23:59:21.423356 bash[1353]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 23:59:21.416978 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 23:59:21.417217 systemd[1]: Finished extend-filesystems.service.
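The resize2fs/EXT4 messages above report the root filesystem growing from 553472 to 1864699 blocks with a 4k block size. As a quick sanity check (an editor's sketch using only the block counts from the log), this converts both sizes to GiB:

```python
# Block counts from the "EXT4-fs (vda9): resizing filesystem" log entry above;
# "(4k) blocks" in the resize2fs output gives the block size.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # 2.11 GiB -> 7.11 GiB
```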
Sep 5 23:59:21.419240 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 5 23:59:21.440322 env[1322]: time="2025-09-05T23:59:21.440275200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 5 23:59:21.440530 env[1322]: time="2025-09-05T23:59:21.440435600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:59:21.441674 env[1322]: time="2025-09-05T23:59:21.441641360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:59:21.441674 env[1322]: time="2025-09-05T23:59:21.441674520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:59:21.441969 env[1322]: time="2025-09-05T23:59:21.441936600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:59:21.441969 env[1322]: time="2025-09-05T23:59:21.441959080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 5 23:59:21.442116 env[1322]: time="2025-09-05T23:59:21.441972800Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 5 23:59:21.442116 env[1322]: time="2025-09-05T23:59:21.441982720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 5 23:59:21.442116 env[1322]: time="2025-09-05T23:59:21.442055840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:59:21.442276 env[1322]: time="2025-09-05T23:59:21.442246640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:59:21.442436 env[1322]: time="2025-09-05T23:59:21.442412840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:59:21.442436 env[1322]: time="2025-09-05T23:59:21.442433200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 5 23:59:21.442657 env[1322]: time="2025-09-05T23:59:21.442488880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 5 23:59:21.442657 env[1322]: time="2025-09-05T23:59:21.442500040Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 23:59:21.445515 env[1322]: time="2025-09-05T23:59:21.445488680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 5 23:59:21.445601 env[1322]: time="2025-09-05T23:59:21.445520920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 5 23:59:21.445601 env[1322]: time="2025-09-05T23:59:21.445534920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 5 23:59:21.445601 env[1322]: time="2025-09-05T23:59:21.445582720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.445601 env[1322]: time="2025-09-05T23:59:21.445598440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.445690 env[1322]: time="2025-09-05T23:59:21.445612640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.445690 env[1322]: time="2025-09-05T23:59:21.445624920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.445997 env[1322]: time="2025-09-05T23:59:21.445975440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.446045 env[1322]: time="2025-09-05T23:59:21.446002600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.446045 env[1322]: time="2025-09-05T23:59:21.446016160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.446045 env[1322]: time="2025-09-05T23:59:21.446028040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.446045 env[1322]: time="2025-09-05T23:59:21.446039800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 5 23:59:21.446170 env[1322]: time="2025-09-05T23:59:21.446150000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 5 23:59:21.446243 env[1322]: time="2025-09-05T23:59:21.446230160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 5 23:59:21.446523 env[1322]: time="2025-09-05T23:59:21.446505160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 5 23:59:21.446662 env[1322]: time="2025-09-05T23:59:21.446532680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446662 env[1322]: time="2025-09-05T23:59:21.446608640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 23:59:21.446754 env[1322]: time="2025-09-05T23:59:21.446732520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446754 env[1322]: time="2025-09-05T23:59:21.446748920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446814 env[1322]: time="2025-09-05T23:59:21.446761960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446814 env[1322]: time="2025-09-05T23:59:21.446773880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446814 env[1322]: time="2025-09-05T23:59:21.446784920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446814 env[1322]: time="2025-09-05T23:59:21.446796400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446814 env[1322]: time="2025-09-05T23:59:21.446807080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446925 env[1322]: time="2025-09-05T23:59:21.446818000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446925 env[1322]: time="2025-09-05T23:59:21.446832120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 5 23:59:21.446971 env[1322]: time="2025-09-05T23:59:21.446960120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446999 env[1322]: time="2025-09-05T23:59:21.446978120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.446999 env[1322]: time="2025-09-05T23:59:21.446990560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 23:59:21.447047 env[1322]: time="2025-09-05T23:59:21.447002240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 23:59:21.447047 env[1322]: time="2025-09-05T23:59:21.447016600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 5 23:59:21.447047 env[1322]: time="2025-09-05T23:59:21.447028240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 23:59:21.447047 env[1322]: time="2025-09-05T23:59:21.447044280Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 5 23:59:21.447128 env[1322]: time="2025-09-05T23:59:21.447077160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Sep 5 23:59:21.447313 env[1322]: time="2025-09-05T23:59:21.447261120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:59:21.447998 env[1322]: time="2025-09-05T23:59:21.447322960Z" level=info msg="Connect containerd service" Sep 5 23:59:21.447998 env[1322]: time="2025-09-05T23:59:21.447354960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:59:21.447998 env[1322]: time="2025-09-05T23:59:21.447983640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:59:21.448300 env[1322]: time="2025-09-05T23:59:21.448279320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:59:21.448331 env[1322]: time="2025-09-05T23:59:21.448279000Z" level=info msg="Start subscribing containerd event" Sep 5 23:59:21.448331 env[1322]: time="2025-09-05T23:59:21.448325560Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:59:21.448371 env[1322]: time="2025-09-05T23:59:21.448336000Z" level=info msg="Start recovering state" Sep 5 23:59:21.448411 env[1322]: time="2025-09-05T23:59:21.448395680Z" level=info msg="Start event monitor" Sep 5 23:59:21.448472 env[1322]: time="2025-09-05T23:59:21.448421560Z" level=info msg="Start snapshots syncer" Sep 5 23:59:21.448472 env[1322]: time="2025-09-05T23:59:21.448433320Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:59:21.448472 env[1322]: time="2025-09-05T23:59:21.448441120Z" level=info msg="Start streaming server" Sep 5 23:59:21.448460 systemd[1]: Started containerd.service. 
Sep 5 23:59:21.450686 env[1322]: time="2025-09-05T23:59:21.450641360Z" level=info msg="containerd successfully booted in 0.046012s" Sep 5 23:59:21.462324 locksmithd[1354]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:59:21.725415 tar[1317]: linux-arm64/LICENSE Sep 5 23:59:21.725517 tar[1317]: linux-arm64/README.md Sep 5 23:59:21.729851 systemd[1]: Finished prepare-helm.service. Sep 5 23:59:21.740655 systemd-networkd[1097]: eth0: Gained IPv6LL Sep 5 23:59:21.742358 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 5 23:59:21.743438 systemd[1]: Reached target network-online.target. Sep 5 23:59:21.745747 systemd[1]: Starting kubelet.service... Sep 5 23:59:22.355889 systemd[1]: Started kubelet.service. Sep 5 23:59:22.751818 kubelet[1378]: E0905 23:59:22.751569 1378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:59:22.753599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:59:22.753749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:59:23.620095 sshd_keygen[1323]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:59:23.637428 systemd[1]: Finished sshd-keygen.service. Sep 5 23:59:23.639728 systemd[1]: Starting issuegen.service... Sep 5 23:59:23.644075 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:59:23.644276 systemd[1]: Finished issuegen.service. Sep 5 23:59:23.646337 systemd[1]: Starting systemd-user-sessions.service... Sep 5 23:59:23.651774 systemd[1]: Finished systemd-user-sessions.service. Sep 5 23:59:23.653738 systemd[1]: Started getty@tty1.service. Sep 5 23:59:23.655581 systemd[1]: Started serial-getty@ttyAMA0.service. 
Sep 5 23:59:23.656416 systemd[1]: Reached target getty.target. Sep 5 23:59:23.657197 systemd[1]: Reached target multi-user.target. Sep 5 23:59:23.659026 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 5 23:59:23.664928 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 5 23:59:23.665137 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 5 23:59:23.666042 systemd[1]: Startup finished in 5.011s (kernel) + 5.648s (userspace) = 10.660s. Sep 5 23:59:26.099284 systemd[1]: Created slice system-sshd.slice. Sep 5 23:59:26.100389 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:38532.service. Sep 5 23:59:26.143472 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 38532 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 5 23:59:26.145441 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.154046 systemd[1]: Created slice user-500.slice. Sep 5 23:59:26.154948 systemd[1]: Starting user-runtime-dir@500.service... Sep 5 23:59:26.156948 systemd-logind[1310]: New session 1 of user core. Sep 5 23:59:26.163590 systemd[1]: Finished user-runtime-dir@500.service. Sep 5 23:59:26.165132 systemd[1]: Starting user@500.service... Sep 5 23:59:26.168335 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.230553 systemd[1409]: Queued start job for default target default.target. Sep 5 23:59:26.230762 systemd[1409]: Reached target paths.target. Sep 5 23:59:26.230777 systemd[1409]: Reached target sockets.target. Sep 5 23:59:26.230787 systemd[1409]: Reached target timers.target. Sep 5 23:59:26.230796 systemd[1409]: Reached target basic.target. Sep 5 23:59:26.230835 systemd[1409]: Reached target default.target. Sep 5 23:59:26.230856 systemd[1409]: Startup finished in 56ms. Sep 5 23:59:26.231396 systemd[1]: Started user@500.service. Sep 5 23:59:26.232725 systemd[1]: Started session-1.scope. 
Sep 5 23:59:26.283421 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:38548.service. Sep 5 23:59:26.324216 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 38548 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 5 23:59:26.325671 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.330131 systemd-logind[1310]: New session 2 of user core. Sep 5 23:59:26.330855 systemd[1]: Started session-2.scope. Sep 5 23:59:26.386435 sshd[1418]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:26.387329 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:38560.service. Sep 5 23:59:26.389844 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:38548.service: Deactivated successfully. Sep 5 23:59:26.390201 systemd-logind[1310]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:59:26.390512 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:59:26.391406 systemd-logind[1310]: Removed session 2. Sep 5 23:59:26.426122 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 38560 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 5 23:59:26.427214 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.430583 systemd-logind[1310]: New session 3 of user core. Sep 5 23:59:26.431329 systemd[1]: Started session-3.scope. Sep 5 23:59:26.480347 sshd[1423]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:26.483104 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:38574.service. Sep 5 23:59:26.483511 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:38560.service: Deactivated successfully. Sep 5 23:59:26.484463 systemd-logind[1310]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:59:26.484509 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:59:26.485149 systemd-logind[1310]: Removed session 3. 
Sep 5 23:59:26.522363 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 38574 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 5 23:59:26.523456 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.526421 systemd-logind[1310]: New session 4 of user core. Sep 5 23:59:26.527173 systemd[1]: Started session-4.scope. Sep 5 23:59:26.578595 sshd[1430]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:26.580466 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:38580.service. Sep 5 23:59:26.581278 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:38574.service: Deactivated successfully. Sep 5 23:59:26.582206 systemd-logind[1310]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:59:26.582368 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:59:26.583199 systemd-logind[1310]: Removed session 4. Sep 5 23:59:26.618790 sshd[1437]: Accepted publickey for core from 10.0.0.1 port 38580 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 5 23:59:26.619812 sshd[1437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.622694 systemd-logind[1310]: New session 5 of user core. Sep 5 23:59:26.623553 systemd[1]: Started session-5.scope. Sep 5 23:59:26.687642 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:59:26.688166 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 5 23:59:26.703689 dbus-daemon[1293]: avc: received setenforce notice (enforcing=1) Sep 5 23:59:26.705314 sudo[1443]: pam_unix(sudo:session): session closed for user root Sep 5 23:59:26.707326 sshd[1437]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:26.709732 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:38580.service: Deactivated successfully. Sep 5 23:59:26.710583 systemd-logind[1310]: Session 5 logged out. Waiting for processes to exit. 
Sep 5 23:59:26.711849 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:38590.service. Sep 5 23:59:26.712466 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:59:26.712997 systemd-logind[1310]: Removed session 5. Sep 5 23:59:26.749508 sshd[1447]: Accepted publickey for core from 10.0.0.1 port 38590 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 5 23:59:26.750642 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.753588 systemd-logind[1310]: New session 6 of user core. Sep 5 23:59:26.754414 systemd[1]: Started session-6.scope. Sep 5 23:59:26.806151 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:59:26.806676 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 5 23:59:26.809237 sudo[1452]: pam_unix(sudo:session): session closed for user root Sep 5 23:59:26.813249 sudo[1451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:59:26.813453 sudo[1451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 5 23:59:26.820984 systemd[1]: Stopping audit-rules.service... Sep 5 23:59:26.821000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 5 23:59:26.822463 auditctl[1455]: No rules Sep 5 23:59:26.822682 kernel: kauditd_printk_skb: 64 callbacks suppressed Sep 5 23:59:26.822724 kernel: audit: type=1305 audit(1757116766.821:144): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 5 23:59:26.822499 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:59:26.822720 systemd[1]: Stopped audit-rules.service. 
Sep 5 23:59:26.821000 audit[1455]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffff7cb2e0 a2=420 a3=0 items=0 ppid=1 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:26.826814 kernel: audit: type=1300 audit(1757116766.821:144): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffff7cb2e0 a2=420 a3=0 items=0 ppid=1 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:26.826886 kernel: audit: type=1327 audit(1757116766.821:144): proctitle=2F7362696E2F617564697463746C002D44 Sep 5 23:59:26.821000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 5 23:59:26.824055 systemd[1]: Starting audit-rules.service... Sep 5 23:59:26.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.829568 kernel: audit: type=1131 audit(1757116766.822:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.841270 augenrules[1473]: No rules Sep 5 23:59:26.841914 systemd[1]: Finished audit-rules.service. Sep 5 23:59:26.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 5 23:59:26.842636 sudo[1451]: pam_unix(sudo:session): session closed for user root Sep 5 23:59:26.842000 audit[1451]: USER_END pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.847406 kernel: audit: type=1130 audit(1757116766.841:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.847455 kernel: audit: type=1106 audit(1757116766.842:147): pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.847471 kernel: audit: type=1104 audit(1757116766.842:148): pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.842000 audit[1451]: CRED_DISP pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.847321 sshd[1447]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:26.849000 audit[1447]: USER_END pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.850027 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:38592.service. 
Sep 5 23:59:26.851227 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:38590.service: Deactivated successfully. Sep 5 23:59:26.852283 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:59:26.852308 systemd-logind[1310]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:59:26.853139 systemd-logind[1310]: Removed session 6. Sep 5 23:59:26.849000 audit[1447]: CRED_DISP pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.855650 kernel: audit: type=1106 audit(1757116766.849:149): pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.855701 kernel: audit: type=1104 audit(1757116766.849:150): pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.855728 kernel: audit: type=1130 audit(1757116766.849:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.34:22-10.0.0.1:38592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.34:22-10.0.0.1:38592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 5 23:59:26.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.34:22-10.0.0.1:38590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.888000 audit[1478]: USER_ACCT pid=1478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.890360 sshd[1478]: Accepted publickey for core from 10.0.0.1 port 38592 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 5 23:59:26.889000 audit[1478]: CRED_ACQ pid=1478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.889000 audit[1478]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffe72850 a2=3 a3=1 items=0 ppid=1 pid=1478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:26.889000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 5 23:59:26.891280 sshd[1478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:59:26.894185 systemd-logind[1310]: New session 7 of user core. Sep 5 23:59:26.894884 systemd[1]: Started session-7.scope. 
Sep 5 23:59:26.896000 audit[1478]: USER_START pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.897000 audit[1483]: CRED_ACQ pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 5 23:59:26.943000 audit[1484]: USER_ACCT pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.943000 audit[1484]: CRED_REFR pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.945179 sudo[1484]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:59:26.945380 sudo[1484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 5 23:59:26.946000 audit[1484]: USER_START pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 5 23:59:26.984333 systemd[1]: Starting docker.service... 
Sep 5 23:59:27.037375 env[1496]: time="2025-09-05T23:59:27.037324140Z" level=info msg="Starting up" Sep 5 23:59:27.038855 env[1496]: time="2025-09-05T23:59:27.038834443Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 5 23:59:27.038855 env[1496]: time="2025-09-05T23:59:27.038854592Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 5 23:59:27.038924 env[1496]: time="2025-09-05T23:59:27.038874302Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 5 23:59:27.038924 env[1496]: time="2025-09-05T23:59:27.038884138Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 5 23:59:27.042267 env[1496]: time="2025-09-05T23:59:27.042243328Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 5 23:59:27.042267 env[1496]: time="2025-09-05T23:59:27.042264193Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 5 23:59:27.042363 env[1496]: time="2025-09-05T23:59:27.042277692Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 5 23:59:27.042363 env[1496]: time="2025-09-05T23:59:27.042286810Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 5 23:59:27.233030 env[1496]: time="2025-09-05T23:59:27.232940716Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 5 23:59:27.233359 env[1496]: time="2025-09-05T23:59:27.233300881Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 5 23:59:27.233595 env[1496]: time="2025-09-05T23:59:27.233577904Z" level=info msg="Loading containers: start." 
Sep 5 23:59:27.278000 audit[1530]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.278000 audit[1530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe174eb80 a2=0 a3=1 items=0 ppid=1496 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.278000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 5 23:59:27.280000 audit[1532]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.280000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc7cf32e0 a2=0 a3=1 items=0 ppid=1496 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.280000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 5 23:59:27.282000 audit[1534]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.282000 audit[1534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffee59ab80 a2=0 a3=1 items=0 ppid=1496 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 5 23:59:27.284000 
audit[1536]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.284000 audit[1536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffea264080 a2=0 a3=1 items=0 ppid=1496 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.284000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 5 23:59:27.286000 audit[1538]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.286000 audit[1538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd1a51580 a2=0 a3=1 items=0 ppid=1496 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.286000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 5 23:59:27.314000 audit[1543]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.314000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff4f4dcf0 a2=0 a3=1 items=0 ppid=1496 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.314000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 5 23:59:27.321000 audit[1545]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.321000 audit[1545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc80a1560 a2=0 a3=1 items=0 ppid=1496 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.321000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 5 23:59:27.323000 audit[1547]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.323000 audit[1547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffcedaf00 a2=0 a3=1 items=0 ppid=1496 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.323000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 5 23:59:27.325000 audit[1549]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.325000 audit[1549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe6b60440 a2=0 a3=1 items=0 ppid=1496 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.325000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 5 23:59:27.332000 audit[1553]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.332000 audit[1553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc7922540 a2=0 a3=1 items=0 ppid=1496 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.332000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 5 23:59:27.346000 audit[1554]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.346000 audit[1554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd6939070 a2=0 a3=1 items=0 ppid=1496 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.346000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 5 23:59:27.356567 kernel: Initializing XFRM netlink socket Sep 5 23:59:27.380039 env[1496]: time="2025-09-05T23:59:27.380003000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 5 23:59:27.396000 audit[1562]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.396000 audit[1562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffe3ba3450 a2=0 a3=1 items=0 ppid=1496 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.396000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 5 23:59:27.415000 audit[1565]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.415000 audit[1565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd46f50f0 a2=0 a3=1 items=0 ppid=1496 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.415000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 5 23:59:27.418000 audit[1568]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.418000 audit[1568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffebafa7e0 a2=0 a3=1 items=0 ppid=1496 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 
23:59:27.418000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 5 23:59:27.420000 audit[1570]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.420000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc54c1fe0 a2=0 a3=1 items=0 ppid=1496 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.420000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 5 23:59:27.422000 audit[1572]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.422000 audit[1572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffdd77bb20 a2=0 a3=1 items=0 ppid=1496 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.422000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 5 23:59:27.424000 audit[1574]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.424000 audit[1574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff7ef6c10 a2=0 a3=1 items=0 ppid=1496 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.424000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 5 23:59:27.425000 audit[1576]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.425000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffcaf07630 a2=0 a3=1 items=0 ppid=1496 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.425000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 5 23:59:27.432000 audit[1579]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.432000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe67746d0 a2=0 a3=1 items=0 ppid=1496 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.432000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 5 23:59:27.434000 audit[1581]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.434000 audit[1581]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc796b300 a2=0 a3=1 items=0 ppid=1496 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.434000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 5 23:59:27.436000 audit[1583]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.436000 audit[1583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd8608830 a2=0 a3=1 items=0 ppid=1496 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.436000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 5 23:59:27.440000 audit[1585]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.440000 audit[1585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd428aac0 a2=0 a3=1 items=0 ppid=1496 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.440000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 5 
23:59:27.441708 systemd-networkd[1097]: docker0: Link UP Sep 5 23:59:27.452000 audit[1589]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.452000 audit[1589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe61e54a0 a2=0 a3=1 items=0 ppid=1496 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.452000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 5 23:59:27.466000 audit[1590]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:27.466000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe8c62ab0 a2=0 a3=1 items=0 ppid=1496 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:27.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 5 23:59:27.467939 env[1496]: time="2025-09-05T23:59:27.467131374Z" level=info msg="Loading containers: done." Sep 5 23:59:27.482856 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck548884283-merged.mount: Deactivated successfully. 
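[Annotation] The audit PROCTITLE records above carry the full command line hex-encoded, with NUL bytes separating argv elements — this is the standard Linux audit proctitle encoding. A minimal decoder sketch (the helper name is ours, not from the log), applied to the first PROCTITLE above:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex of argv joined by NUL bytes."""
    return bytes.fromhex(hex_str).decode().split("\x00")

# First PROCTITLE in this excerpt: the iptables call that created the
# DOCKER-ISOLATION-STAGE-2 chain.
argv = decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D7400"
    "66696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32"
)
# argv == ['/usr/sbin/iptables', '--wait', '-t', 'filter', '-N',
#          'DOCKER-ISOLATION-STAGE-2']
```

The same decoding applies to every `proctitle=` value in this section; together they reconstruct the sequence of `iptables --wait` invocations Docker issues while wiring up its DOCKER-USER and DOCKER-ISOLATION chains.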
Sep 5 23:59:27.494597 env[1496]: time="2025-09-05T23:59:27.494274031Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:59:27.494597 env[1496]: time="2025-09-05T23:59:27.494557703Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 5 23:59:27.495126 env[1496]: time="2025-09-05T23:59:27.494905484Z" level=info msg="Daemon has completed initialization" Sep 5 23:59:27.511783 systemd[1]: Started docker.service. Sep 5 23:59:27.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:27.519297 env[1496]: time="2025-09-05T23:59:27.519184024Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:59:28.150650 env[1322]: time="2025-09-05T23:59:28.150610762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 23:59:28.701060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943327735.mount: Deactivated successfully. 
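[Annotation] The audit SYSCALL records above are flat `key=value` pairs (values either bare tokens or double-quoted). A small parser sketch for pulling fields out of such a record — the regex and function are assumptions of ours, matching the format as it appears here; note that on the arm64 (generic) syscall table, `syscall=211` is `sendmsg`, consistent with `xtables-nft-multi` sending nftables netlink messages:

```python
import re

# Matches key=value where value is a double-quoted string or a bare token.
FIELD_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_audit(record: str) -> dict:
    """Parse a flat audit record into a dict, stripping surrounding quotes."""
    return {k: v.strip('"') for k, v in FIELD_RE.findall(record)}

# A fragment of one SYSCALL record from this excerpt:
rec = ('arch=c00000b7 syscall=211 success=yes exit=112 comm="iptables" '
       'exe="/usr/sbin/xtables-nft-multi"')
fields = parse_audit(rec)
# fields["syscall"] == "211", fields["exe"] == "/usr/sbin/xtables-nft-multi"
```

This is enough to, e.g., group the records by `pid` or filter on `op=nft_register_chain` when auditing what the Docker daemon changed.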
Sep 5 23:59:29.904208 env[1322]: time="2025-09-05T23:59:29.904158023Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:29.905804 env[1322]: time="2025-09-05T23:59:29.905766479Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:29.908022 env[1322]: time="2025-09-05T23:59:29.907995499Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:29.909568 env[1322]: time="2025-09-05T23:59:29.909543684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:29.910419 env[1322]: time="2025-09-05T23:59:29.910392417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 23:59:29.911643 env[1322]: time="2025-09-05T23:59:29.911560960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 23:59:31.215404 env[1322]: time="2025-09-05T23:59:31.215357921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:31.216947 env[1322]: time="2025-09-05T23:59:31.216908841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 5 23:59:31.218656 env[1322]: time="2025-09-05T23:59:31.218614511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:31.221349 env[1322]: time="2025-09-05T23:59:31.221316015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:31.221855 env[1322]: time="2025-09-05T23:59:31.221821034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 23:59:31.223244 env[1322]: time="2025-09-05T23:59:31.223215370Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 23:59:32.441319 env[1322]: time="2025-09-05T23:59:32.441272787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:32.442559 env[1322]: time="2025-09-05T23:59:32.442512546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:32.444388 env[1322]: time="2025-09-05T23:59:32.444358418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:32.446093 env[1322]: time="2025-09-05T23:59:32.446064334Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:32.446870 env[1322]: time="2025-09-05T23:59:32.446840890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 23:59:32.447444 env[1322]: time="2025-09-05T23:59:32.447415595Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 23:59:32.946106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:59:32.948098 kernel: kauditd_printk_skb: 84 callbacks suppressed Sep 5 23:59:32.948134 kernel: audit: type=1130 audit(1757116772.945:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:32.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:32.946271 systemd[1]: Stopped kubelet.service. Sep 5 23:59:32.947733 systemd[1]: Starting kubelet.service... Sep 5 23:59:32.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:32.952743 kernel: audit: type=1131 audit(1757116772.945:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:33.042054 systemd[1]: Started kubelet.service. 
Sep 5 23:59:33.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:33.045551 kernel: audit: type=1130 audit(1757116773.041:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:33.082280 kubelet[1637]: E0905 23:59:33.082218 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:59:33.084588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:59:33.084727 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:59:33.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 5 23:59:33.087559 kernel: audit: type=1131 audit(1757116773.084:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 5 23:59:33.814974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270420653.mount: Deactivated successfully. 
Sep 5 23:59:34.386513 env[1322]: time="2025-09-05T23:59:34.386452257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:34.387603 env[1322]: time="2025-09-05T23:59:34.387576458Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:34.388934 env[1322]: time="2025-09-05T23:59:34.388909169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:34.390110 env[1322]: time="2025-09-05T23:59:34.390077212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:34.390625 env[1322]: time="2025-09-05T23:59:34.390600881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 23:59:34.391105 env[1322]: time="2025-09-05T23:59:34.391027722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:59:34.913025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801644754.mount: Deactivated successfully. 
Sep 5 23:59:35.742717 env[1322]: time="2025-09-05T23:59:35.742657583Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:35.744594 env[1322]: time="2025-09-05T23:59:35.744558467Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:35.746481 env[1322]: time="2025-09-05T23:59:35.746451802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:35.749084 env[1322]: time="2025-09-05T23:59:35.749051159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:35.749867 env[1322]: time="2025-09-05T23:59:35.749830427Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:59:35.750532 env[1322]: time="2025-09-05T23:59:35.750505657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:59:36.196101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987349863.mount: Deactivated successfully. 
Sep 5 23:59:36.201081 env[1322]: time="2025-09-05T23:59:36.201034648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:36.202851 env[1322]: time="2025-09-05T23:59:36.202815145Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:36.204430 env[1322]: time="2025-09-05T23:59:36.204393678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:36.205783 env[1322]: time="2025-09-05T23:59:36.205754546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:36.207057 env[1322]: time="2025-09-05T23:59:36.207025936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:59:36.207693 env[1322]: time="2025-09-05T23:59:36.207655360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 23:59:36.685370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257341488.mount: Deactivated successfully. 
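[Annotation] Each completed pull above is reported by containerd as `PullImage \"<image>\" returns image reference \"sha256:<digest>\"` (with the quotes backslash-escaped inside the journal's `msg="…"` field). A parser sketch for collecting the image-to-digest mapping from such an excerpt — the regex assumes exactly this escaped message shape:

```python
import re

# Match the escaped form as it appears inside msg="..." journal fields.
PULL_RE = re.compile(
    r'PullImage \\"([^"\\]+)\\" returns image reference \\"(sha256:[0-9a-f]+)\\"'
)

def image_digests(text: str) -> dict:
    """Map pulled image names to the digests containerd reports for them."""
    return dict(PULL_RE.findall(text))

# One line from this excerpt (pause image pull):
line = (r'msg="PullImage \"registry.k8s.io/pause:3.10\" returns image '
        r'reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff'
        r'3ed14e5e4b3e5699057e6aa8\""')
digests = image_digests(line)
```

Run over the whole section, this recovers the digests for the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, and pause pulls in one pass.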
Sep 5 23:59:38.805283 env[1322]: time="2025-09-05T23:59:38.805225660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:38.806809 env[1322]: time="2025-09-05T23:59:38.806777165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:38.808531 env[1322]: time="2025-09-05T23:59:38.808499811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:38.811099 env[1322]: time="2025-09-05T23:59:38.811075009Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:38.811985 env[1322]: time="2025-09-05T23:59:38.811956731Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 23:59:43.196128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 23:59:43.196299 systemd[1]: Stopped kubelet.service. Sep 5 23:59:43.197813 systemd[1]: Starting kubelet.service... Sep 5 23:59:43.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 5 23:59:43.202469 kernel: audit: type=1130 audit(1757116783.195:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.202557 kernel: audit: type=1131 audit(1757116783.195:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.288906 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 23:59:43.288977 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 23:59:43.289250 systemd[1]: Stopped kubelet.service. Sep 5 23:59:43.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 5 23:59:43.291390 systemd[1]: Starting kubelet.service... Sep 5 23:59:43.292621 kernel: audit: type=1130 audit(1757116783.288:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 5 23:59:43.315512 systemd[1]: Reloading. 
Sep 5 23:59:43.364215 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2025-09-05T23:59:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 5 23:59:43.364247 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2025-09-05T23:59:43Z" level=info msg="torcx already run" Sep 5 23:59:43.516824 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 5 23:59:43.517054 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 5 23:59:43.540323 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:59:43.618453 systemd[1]: Started kubelet.service. Sep 5 23:59:43.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.621571 kernel: audit: type=1130 audit(1757116783.618:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.622949 systemd[1]: Stopping kubelet.service... Sep 5 23:59:43.625349 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:59:43.625667 systemd[1]: Stopped kubelet.service. 
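[Annotation] During the reload above, systemd flags `locksmithd.service` for the deprecated `CPUShares=` and `MemoryLimit=` directives and names their replacements. A sketch of a drop-in that would silence the warnings without editing the shipped unit — the file name and numeric values here are illustrative assumptions, not taken from the actual unit:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (hypothetical)
[Service]
# CPUShares= (deprecated)  -> CPUWeight=  (range 1-10000, default 100)
CPUWeight=100
# MemoryLimit= (deprecated) -> MemoryMax= (accepts K/M/G suffixes)
MemoryMax=512M
```

After writing the drop-in, `systemctl daemon-reload` picks it up; the original unit file stays untouched.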
Sep 5 23:59:43.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.628569 kernel: audit: type=1131 audit(1757116783.625:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.630257 systemd[1]: Starting kubelet.service... Sep 5 23:59:43.729569 systemd[1]: Started kubelet.service. Sep 5 23:59:43.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.733552 kernel: audit: type=1130 audit(1757116783.729:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:43.771401 kubelet[1761]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:59:43.771401 kubelet[1761]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:59:43.771401 kubelet[1761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 23:59:43.771401 kubelet[1761]: I0905 23:59:43.771051 1761 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:59:44.479074 kubelet[1761]: I0905 23:59:44.479013 1761 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:59:44.479074 kubelet[1761]: I0905 23:59:44.479049 1761 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:59:44.479358 kubelet[1761]: I0905 23:59:44.479327 1761 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:59:44.500726 kubelet[1761]: I0905 23:59:44.500687 1761 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:59:44.501617 kubelet[1761]: E0905 23:59:44.501263 1761 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:44.507250 kubelet[1761]: E0905 23:59:44.507189 1761 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:59:44.507250 kubelet[1761]: I0905 23:59:44.507250 1761 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:59:44.512608 kubelet[1761]: I0905 23:59:44.512579 1761 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:59:44.513625 kubelet[1761]: I0905 23:59:44.513605 1761 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:59:44.513791 kubelet[1761]: I0905 23:59:44.513763 1761 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:59:44.513956 kubelet[1761]: I0905 23:59:44.513793 1761 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Sep 5 23:59:44.514039 kubelet[1761]: I0905 23:59:44.514029 1761 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:59:44.514067 kubelet[1761]: I0905 23:59:44.514041 1761 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:59:44.514273 kubelet[1761]: I0905 23:59:44.514262 1761 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:59:44.516454 kubelet[1761]: I0905 23:59:44.516433 1761 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:59:44.516514 kubelet[1761]: I0905 23:59:44.516458 1761 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:59:44.516514 kubelet[1761]: I0905 23:59:44.516484 1761 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:59:44.516589 kubelet[1761]: I0905 23:59:44.516571 1761 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:59:44.547489 kubelet[1761]: W0905 23:59:44.547425 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 5 23:59:44.547588 kubelet[1761]: E0905 23:59:44.547499 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:44.550835 kubelet[1761]: I0905 23:59:44.550814 1761 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 5 23:59:44.551795 kubelet[1761]: I0905 23:59:44.551774 1761 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:59:44.552090 
kubelet[1761]: W0905 23:59:44.552077 1761 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:59:44.553257 kubelet[1761]: W0905 23:59:44.553157 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 5 23:59:44.553257 kubelet[1761]: E0905 23:59:44.553205 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:44.553485 kubelet[1761]: I0905 23:59:44.553464 1761 server.go:1274] "Started kubelet" Sep 5 23:59:44.553000 audit[1761]: AVC avc: denied { mac_admin } for pid=1761 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:44.553000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:44.557677 kubelet[1761]: I0905 23:59:44.555311 1761 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:59:44.557677 kubelet[1761]: I0905 23:59:44.555670 1761 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:59:44.557677 kubelet[1761]: I0905 23:59:44.555804 1761 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:59:44.558708 kernel: audit: type=1400 audit(1757116784.553:196): avc: denied { mac_admin } for pid=1761 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:44.558754 kernel: audit: type=1401 audit(1757116784.553:196): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:44.558774 kernel: audit: type=1300 audit(1757116784.553:196): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b14960 a1=4000b1e5e8 a2=4000b14930 a3=25 items=0 ppid=1 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.553000 audit[1761]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b14960 a1=4000b1e5e8 a2=4000b14930 a3=25 items=0 ppid=1 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.553000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:44.562326 kubelet[1761]: I0905 23:59:44.562268 1761 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 5 23:59:44.562378 kubelet[1761]: I0905 23:59:44.562333 1761 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 5 23:59:44.562441 kubelet[1761]: I0905 23:59:44.562424 1761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:59:44.562895 kubelet[1761]: I0905 23:59:44.562872 1761 
server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:59:44.564217 kubelet[1761]: E0905 23:59:44.563131 1761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18628867ec2e69b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 23:59:44.553388467 +0000 UTC m=+0.819449249,LastTimestamp:2025-09-05 23:59:44.553388467 +0000 UTC m=+0.819449249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 23:59:44.564669 kubelet[1761]: I0905 23:59:44.564641 1761 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:59:44.564892 kernel: audit: type=1327 audit(1757116784.553:196): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:44.561000 audit[1761]: AVC avc: denied { mac_admin } for pid=1761 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:44.561000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:44.561000 audit[1761]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000114900 a1=400073c330 a2=4000641d40 a3=25 items=0 ppid=1 pid=1761 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:44.567141 kubelet[1761]: I0905 23:59:44.567118 1761 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:59:44.567721 kubelet[1761]: I0905 23:59:44.567396 1761 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:59:44.567721 kubelet[1761]: I0905 23:59:44.567642 1761 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:59:44.567815 kubelet[1761]: I0905 23:59:44.567773 1761 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:59:44.567950 kubelet[1761]: I0905 23:59:44.567921 1761 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:59:44.568284 kubelet[1761]: W0905 23:59:44.568245 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 5 23:59:44.568345 kubelet[1761]: E0905 23:59:44.568302 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:44.568709 kubelet[1761]: E0905 23:59:44.568686 1761 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:59:44.568803 kubelet[1761]: E0905 23:59:44.568785 1761 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:59:44.569223 kubelet[1761]: E0905 23:59:44.569184 1761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Sep 5 23:59:44.570960 kubelet[1761]: I0905 23:59:44.570922 1761 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:59:44.569000 audit[1775]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:44.569000 audit[1775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff8416d80 a2=0 a3=1 items=0 ppid=1761 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 5 23:59:44.571000 audit[1776]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:44.571000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbd0c2e0 a2=0 a3=1 items=0 ppid=1761 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.571000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 5 23:59:44.573000 audit[1778]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:44.573000 audit[1778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff2aae590 a2=0 a3=1 items=0 ppid=1761 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 5 23:59:44.575000 audit[1780]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:44.575000 audit[1780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc72b8de0 a2=0 a3=1 items=0 ppid=1761 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 5 23:59:44.582000 audit[1783]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:44.582000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe78e5290 a2=0 a3=1 items=0 ppid=1761 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.582000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 5 23:59:44.583918 kubelet[1761]: I0905 23:59:44.583865 1761 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:59:44.583000 audit[1785]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:44.583000 audit[1785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe1b14390 a2=0 a3=1 items=0 ppid=1761 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 5 23:59:44.584872 kubelet[1761]: I0905 23:59:44.584851 1761 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:59:44.584872 kubelet[1761]: I0905 23:59:44.584871 1761 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:59:44.584933 kubelet[1761]: I0905 23:59:44.584890 1761 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:59:44.584960 kubelet[1761]: E0905 23:59:44.584931 1761 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:59:44.584000 audit[1786]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:44.584000 audit[1786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf57c1b0 a2=0 a3=1 items=0 ppid=1761 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 5 23:59:44.585000 audit[1787]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:44.585000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda8f86e0 a2=0 a3=1 items=0 ppid=1761 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 5 23:59:44.586000 audit[1788]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 
23:59:44.586000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd679d3f0 a2=0 a3=1 items=0 ppid=1761 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 5 23:59:44.587000 audit[1789]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:44.587000 audit[1789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe96320b0 a2=0 a3=1 items=0 ppid=1761 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 5 23:59:44.588000 audit[1791]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:44.588000 audit[1791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc6c35bb0 a2=0 a3=1 items=0 ppid=1761 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 5 23:59:44.589000 audit[1793]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Sep 5 23:59:44.589000 audit[1793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd8182700 a2=0 a3=1 items=0 ppid=1761 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 5 23:59:44.591516 kubelet[1761]: W0905 23:59:44.591465 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 5 23:59:44.591604 kubelet[1761]: E0905 23:59:44.591515 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:44.592167 kubelet[1761]: I0905 23:59:44.592132 1761 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:59:44.592167 kubelet[1761]: I0905 23:59:44.592148 1761 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:59:44.592167 kubelet[1761]: I0905 23:59:44.592167 1761 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:59:44.594475 kubelet[1761]: I0905 23:59:44.594452 1761 policy_none.go:49] "None policy: Start" Sep 5 23:59:44.594860 kubelet[1761]: I0905 23:59:44.594847 1761 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:59:44.594888 kubelet[1761]: I0905 23:59:44.594868 1761 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:59:44.599410 kubelet[1761]: I0905 23:59:44.599375 1761 
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:59:44.597000 audit[1761]: AVC avc: denied { mac_admin } for pid=1761 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:44.597000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:44.597000 audit[1761]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ed0c90 a1=4000ec6918 a2=4000ed0c60 a3=25 items=0 ppid=1 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:44.597000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:44.599665 kubelet[1761]: I0905 23:59:44.599444 1761 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 5 23:59:44.599665 kubelet[1761]: I0905 23:59:44.599560 1761 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:59:44.599665 kubelet[1761]: I0905 23:59:44.599570 1761 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:59:44.600270 kubelet[1761]: I0905 23:59:44.600232 1761 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:59:44.601258 kubelet[1761]: E0905 23:59:44.601237 1761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 23:59:44.700809 kubelet[1761]: I0905 23:59:44.700759 1761 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:59:44.701458 kubelet[1761]: E0905 23:59:44.701423 1761 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 5 23:59:44.769978 kubelet[1761]: I0905 23:59:44.768950 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72fda1784bb2ec485f5a3be850ea794c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"72fda1784bb2ec485f5a3be850ea794c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:59:44.769978 kubelet[1761]: I0905 23:59:44.769593 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:44.769978 kubelet[1761]: I0905 23:59:44.769625 
1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:44.769978 kubelet[1761]: I0905 23:59:44.769642 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:44.769978 kubelet[1761]: I0905 23:59:44.769658 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:44.770182 kubelet[1761]: I0905 23:59:44.769674 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 23:59:44.770182 kubelet[1761]: I0905 23:59:44.769688 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72fda1784bb2ec485f5a3be850ea794c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"72fda1784bb2ec485f5a3be850ea794c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:59:44.770182 kubelet[1761]: I0905 23:59:44.769702 1761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:44.770182 kubelet[1761]: I0905 23:59:44.769724 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72fda1784bb2ec485f5a3be850ea794c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"72fda1784bb2ec485f5a3be850ea794c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:59:44.770439 kubelet[1761]: E0905 23:59:44.770378 1761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Sep 5 23:59:44.902764 kubelet[1761]: I0905 23:59:44.902734 1761 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:59:44.903423 kubelet[1761]: E0905 23:59:44.903391 1761 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 5 23:59:44.995758 kubelet[1761]: E0905 23:59:44.995712 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:44.995941 kubelet[1761]: E0905 23:59:44.995915 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:44.996284 kubelet[1761]: E0905 23:59:44.996268 
1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:44.996474 env[1322]: time="2025-09-05T23:59:44.996418927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 23:59:44.997064 env[1322]: time="2025-09-05T23:59:44.996938324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:72fda1784bb2ec485f5a3be850ea794c,Namespace:kube-system,Attempt:0,}" Sep 5 23:59:44.997310 env[1322]: time="2025-09-05T23:59:44.997258215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 23:59:45.171027 kubelet[1761]: E0905 23:59:45.170920 1761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Sep 5 23:59:45.304860 kubelet[1761]: I0905 23:59:45.304819 1761 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:59:45.305206 kubelet[1761]: E0905 23:59:45.305165 1761 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 5 23:59:45.399474 kubelet[1761]: W0905 23:59:45.399277 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 5 23:59:45.399474 kubelet[1761]: E0905 23:59:45.399352 1761 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:45.452534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840812540.mount: Deactivated successfully. Sep 5 23:59:45.455003 kubelet[1761]: W0905 23:59:45.454919 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 5 23:59:45.455003 kubelet[1761]: E0905 23:59:45.454990 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:45.459612 env[1322]: time="2025-09-05T23:59:45.459132820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.463952 env[1322]: time="2025-09-05T23:59:45.463909708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.466150 env[1322]: time="2025-09-05T23:59:45.466120204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.467093 env[1322]: time="2025-09-05T23:59:45.467045986Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.477171 env[1322]: time="2025-09-05T23:59:45.476419155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.478127 env[1322]: time="2025-09-05T23:59:45.478090432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.479832 env[1322]: time="2025-09-05T23:59:45.479803092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.482245 env[1322]: time="2025-09-05T23:59:45.482162448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.484687 env[1322]: time="2025-09-05T23:59:45.483376632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.486551 env[1322]: time="2025-09-05T23:59:45.486514349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.488267 env[1322]: time="2025-09-05T23:59:45.488241403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.492441 env[1322]: time="2025-09-05T23:59:45.492405502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:45.542609 env[1322]: time="2025-09-05T23:59:45.541922305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:59:45.542609 env[1322]: time="2025-09-05T23:59:45.541964607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:59:45.542609 env[1322]: time="2025-09-05T23:59:45.541981880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:59:45.544119 env[1322]: time="2025-09-05T23:59:45.542481676Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c12339161839f516240b375f4fdcd4fcd223058932b0c242241621a1216c653e pid=1816 runtime=io.containerd.runc.v2 Sep 5 23:59:45.544119 env[1322]: time="2025-09-05T23:59:45.543519532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:59:45.544119 env[1322]: time="2025-09-05T23:59:45.543627048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:59:45.544119 env[1322]: time="2025-09-05T23:59:45.543637963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:59:45.544119 env[1322]: time="2025-09-05T23:59:45.543817090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1bf6deaa1b88bf3c8c9deee2325c4158162bd1a6b3052c43b1e63110b40391 pid=1829 runtime=io.containerd.runc.v2 Sep 5 23:59:45.544626 env[1322]: time="2025-09-05T23:59:45.544489096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:59:45.544626 env[1322]: time="2025-09-05T23:59:45.544525641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:59:45.544626 env[1322]: time="2025-09-05T23:59:45.544550471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:59:45.545492 env[1322]: time="2025-09-05T23:59:45.544833515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb45744c5359e97d8cbede7fa8b9fd51b095590f4ee41053bdeb010ef39f6d4d pid=1811 runtime=io.containerd.runc.v2 Sep 5 23:59:45.588141 kubelet[1761]: W0905 23:59:45.588076 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 5 23:59:45.588290 kubelet[1761]: E0905 23:59:45.588151 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:59:45.601471 env[1322]: time="2025-09-05T23:59:45.601429785Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:72fda1784bb2ec485f5a3be850ea794c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc1bf6deaa1b88bf3c8c9deee2325c4158162bd1a6b3052c43b1e63110b40391\"" Sep 5 23:59:45.602652 kubelet[1761]: E0905 23:59:45.602610 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:45.604522 env[1322]: time="2025-09-05T23:59:45.604478099Z" level=info msg="CreateContainer within sandbox \"fc1bf6deaa1b88bf3c8c9deee2325c4158162bd1a6b3052c43b1e63110b40391\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:59:45.610986 env[1322]: time="2025-09-05T23:59:45.609884529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c12339161839f516240b375f4fdcd4fcd223058932b0c242241621a1216c653e\"" Sep 5 23:59:45.611079 kubelet[1761]: E0905 23:59:45.610396 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:45.611713 env[1322]: time="2025-09-05T23:59:45.611678596Z" level=info msg="CreateContainer within sandbox \"c12339161839f516240b375f4fdcd4fcd223058932b0c242241621a1216c653e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:59:45.611900 env[1322]: time="2025-09-05T23:59:45.611873636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb45744c5359e97d8cbede7fa8b9fd51b095590f4ee41053bdeb010ef39f6d4d\"" Sep 5 23:59:45.612509 kubelet[1761]: E0905 23:59:45.612484 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:45.614039 env[1322]: time="2025-09-05T23:59:45.614005325Z" level=info msg="CreateContainer within sandbox \"eb45744c5359e97d8cbede7fa8b9fd51b095590f4ee41053bdeb010ef39f6d4d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:59:45.627260 env[1322]: time="2025-09-05T23:59:45.627207649Z" level=info msg="CreateContainer within sandbox \"fc1bf6deaa1b88bf3c8c9deee2325c4158162bd1a6b3052c43b1e63110b40391\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"69aadcee39e4dd65857e820ab60e535da6f328b15718b52ddd45d1e2ef851085\"" Sep 5 23:59:45.627891 env[1322]: time="2025-09-05T23:59:45.627832354Z" level=info msg="StartContainer for \"69aadcee39e4dd65857e820ab60e535da6f328b15718b52ddd45d1e2ef851085\"" Sep 5 23:59:45.640760 env[1322]: time="2025-09-05T23:59:45.640717608Z" level=info msg="CreateContainer within sandbox \"c12339161839f516240b375f4fdcd4fcd223058932b0c242241621a1216c653e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c3017848ed4b8a5a5627e639a0d07bd0b5d58e3d1114cd9a14db1b5a96c5dd9\"" Sep 5 23:59:45.641189 env[1322]: time="2025-09-05T23:59:45.641163746Z" level=info msg="StartContainer for \"3c3017848ed4b8a5a5627e639a0d07bd0b5d58e3d1114cd9a14db1b5a96c5dd9\"" Sep 5 23:59:45.642181 env[1322]: time="2025-09-05T23:59:45.642131150Z" level=info msg="CreateContainer within sandbox \"eb45744c5359e97d8cbede7fa8b9fd51b095590f4ee41053bdeb010ef39f6d4d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7cc00fd5df32f36a08624112c95187a2a7b070b2215eb5eff47d2ec4e336604d\"" Sep 5 23:59:45.642521 env[1322]: time="2025-09-05T23:59:45.642494922Z" level=info msg="StartContainer for \"7cc00fd5df32f36a08624112c95187a2a7b070b2215eb5eff47d2ec4e336604d\"" Sep 5 23:59:45.696517 env[1322]: time="2025-09-05T23:59:45.696473821Z" level=info msg="StartContainer for 
\"69aadcee39e4dd65857e820ab60e535da6f328b15718b52ddd45d1e2ef851085\" returns successfully" Sep 5 23:59:45.716062 env[1322]: time="2025-09-05T23:59:45.714977979Z" level=info msg="StartContainer for \"7cc00fd5df32f36a08624112c95187a2a7b070b2215eb5eff47d2ec4e336604d\" returns successfully" Sep 5 23:59:45.726564 env[1322]: time="2025-09-05T23:59:45.726506747Z" level=info msg="StartContainer for \"3c3017848ed4b8a5a5627e639a0d07bd0b5d58e3d1114cd9a14db1b5a96c5dd9\" returns successfully" Sep 5 23:59:46.107192 kubelet[1761]: I0905 23:59:46.106520 1761 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:59:46.598277 kubelet[1761]: E0905 23:59:46.598147 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:46.600360 kubelet[1761]: E0905 23:59:46.600337 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:46.602896 kubelet[1761]: E0905 23:59:46.602868 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:47.393218 kubelet[1761]: E0905 23:59:47.393183 1761 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 23:59:47.504677 kubelet[1761]: I0905 23:59:47.504626 1761 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 23:59:47.504677 kubelet[1761]: E0905 23:59:47.504666 1761 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 23:59:47.520988 kubelet[1761]: E0905 23:59:47.520941 1761 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"localhost\" not found" Sep 5 23:59:47.604021 kubelet[1761]: E0905 23:59:47.603990 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:47.621353 kubelet[1761]: E0905 23:59:47.621323 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:59:47.722095 kubelet[1761]: E0905 23:59:47.722064 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:59:47.793276 kubelet[1761]: E0905 23:59:47.793245 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:47.822683 kubelet[1761]: E0905 23:59:47.822652 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:59:47.852491 kubelet[1761]: E0905 23:59:47.852468 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:48.534412 kubelet[1761]: I0905 23:59:48.534355 1761 apiserver.go:52] "Watching apiserver" Sep 5 23:59:48.568344 kubelet[1761]: I0905 23:59:48.568299 1761 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:59:49.686877 systemd[1]: Reloading. 
Sep 5 23:59:49.744118 /usr/lib/systemd/system-generators/torcx-generator[2057]: time="2025-09-05T23:59:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 5 23:59:49.744149 /usr/lib/systemd/system-generators/torcx-generator[2057]: time="2025-09-05T23:59:49Z" level=info msg="torcx already run" Sep 5 23:59:49.800008 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 5 23:59:49.800029 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 5 23:59:49.815335 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:59:49.885055 systemd[1]: Stopping kubelet.service... Sep 5 23:59:49.906937 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:59:49.907238 systemd[1]: Stopped kubelet.service. Sep 5 23:59:49.909922 kernel: kauditd_printk_skb: 44 callbacks suppressed Sep 5 23:59:49.909957 kernel: audit: type=1131 audit(1757116789.905:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:49.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:49.909047 systemd[1]: Starting kubelet.service... Sep 5 23:59:50.004615 systemd[1]: Started kubelet.service. 
Sep 5 23:59:50.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:50.008570 kernel: audit: type=1130 audit(1757116790.003:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 5 23:59:50.047421 kubelet[2108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:59:50.047421 kubelet[2108]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:59:50.047421 kubelet[2108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 23:59:50.047868 kubelet[2108]: I0905 23:59:50.047455 2108 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:59:50.053041 kubelet[2108]: I0905 23:59:50.052996 2108 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:59:50.053041 kubelet[2108]: I0905 23:59:50.053031 2108 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:59:50.053271 kubelet[2108]: I0905 23:59:50.053254 2108 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:59:50.054668 kubelet[2108]: I0905 23:59:50.054649 2108 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 23:59:50.056873 kubelet[2108]: I0905 23:59:50.056837 2108 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:59:50.060949 kubelet[2108]: E0905 23:59:50.060909 2108 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:59:50.060949 kubelet[2108]: I0905 23:59:50.060949 2108 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:59:50.063919 kubelet[2108]: I0905 23:59:50.063892 2108 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:59:50.064427 kubelet[2108]: I0905 23:59:50.064262 2108 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:59:50.064427 kubelet[2108]: I0905 23:59:50.064361 2108 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:59:50.064715 kubelet[2108]: I0905 23:59:50.064379 2108 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Sep 5 23:59:50.064821 kubelet[2108]: I0905 23:59:50.064728 2108 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:59:50.064821 kubelet[2108]: I0905 23:59:50.064739 2108 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:59:50.064821 kubelet[2108]: I0905 23:59:50.064774 2108 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:59:50.064907 kubelet[2108]: I0905 23:59:50.064867 2108 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:59:50.064907 kubelet[2108]: I0905 23:59:50.064880 2108 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:59:50.064907 kubelet[2108]: I0905 23:59:50.064897 2108 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:59:50.065033 kubelet[2108]: I0905 23:59:50.064909 2108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:59:50.065887 kubelet[2108]: I0905 23:59:50.065863 2108 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 5 23:59:50.066349 kubelet[2108]: I0905 23:59:50.066330 2108 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:59:50.066818 kubelet[2108]: I0905 23:59:50.066792 2108 server.go:1274] "Started kubelet" Sep 5 23:59:50.071740 kubelet[2108]: I0905 23:59:50.068458 2108 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 5 23:59:50.071740 kubelet[2108]: I0905 23:59:50.068494 2108 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 5 23:59:50.071740 kubelet[2108]: I0905 23:59:50.068517 2108 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:59:50.071740 kubelet[2108]: I0905 23:59:50.070075 2108 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:59:50.071740 kubelet[2108]: I0905 23:59:50.070923 2108 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:59:50.085969 kernel: audit: type=1400 audit(1757116790.066:213): avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:50.086002 kernel: audit: type=1401 audit(1757116790.066:213): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:50.086019 kernel: audit: type=1300 audit(1757116790.066:213): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000876300 a1=40000577d0 a2=40008762d0 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:50.066000 audit[2108]: AVC avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:50.066000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:50.066000 audit[2108]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000876300 a1=40000577d0 a2=40008762d0 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.073043 2108 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.073207 2108 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Sep 5 23:59:50.086150 kubelet[2108]: E0905 23:59:50.073242 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.074057 2108 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.074147 2108 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.074360 2108 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.074418 2108 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.074529 2108 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.074557 2108 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.075998 2108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.076807 2108 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.076835 2108 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.076850 2108 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:59:50.086150 kubelet[2108]: E0905 23:59:50.076887 2108 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:59:50.086150 kubelet[2108]: I0905 23:59:50.077757 2108 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:59:50.092225 kernel: audit: type=1327 audit(1757116790.066:213): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:50.066000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:50.066000 audit[2108]: AVC avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:50.095320 kernel: audit: type=1400 audit(1757116790.066:214): avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:50.066000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:50.097465 kernel: audit: type=1401 audit(1757116790.066:214): op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:50.066000 audit[2108]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000989280 a1=40000577e8 a2=4000876390 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:50.102028 kernel: audit: type=1300 audit(1757116790.066:214): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000989280 a1=40000577e8 a2=4000876390 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:50.066000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:50.107609 kernel: audit: type=1327 audit(1757116790.066:214): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:50.149953 kubelet[2108]: I0905 23:59:50.149920 2108 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:59:50.150128 kubelet[2108]: I0905 23:59:50.150112 2108 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:59:50.150191 kubelet[2108]: I0905 23:59:50.150183 2108 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:59:50.150388 kubelet[2108]: I0905 23:59:50.150373 2108 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:59:50.150474 kubelet[2108]: I0905 23:59:50.150449 2108 state_mem.go:96] "Updated CPUSet 
assignments" assignments={} Sep 5 23:59:50.150527 kubelet[2108]: I0905 23:59:50.150519 2108 policy_none.go:49] "None policy: Start" Sep 5 23:59:50.151213 kubelet[2108]: I0905 23:59:50.151186 2108 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:59:50.151213 kubelet[2108]: I0905 23:59:50.151214 2108 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:59:50.151371 kubelet[2108]: I0905 23:59:50.151356 2108 state_mem.go:75] "Updated machine memory state" Sep 5 23:59:50.152555 kubelet[2108]: I0905 23:59:50.152514 2108 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:59:50.150000 audit[2108]: AVC avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 5 23:59:50.150000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 5 23:59:50.150000 audit[2108]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40012b6390 a1=40012f37e8 a2=40012b6360 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:50.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 5 23:59:50.152781 kubelet[2108]: I0905 23:59:50.152613 2108 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 5 23:59:50.152781 kubelet[2108]: I0905 23:59:50.152752 2108 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:59:50.152781 kubelet[2108]: I0905 23:59:50.152762 2108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:59:50.153237 kubelet[2108]: I0905 23:59:50.153206 2108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:59:50.256227 kubelet[2108]: I0905 23:59:50.256122 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:59:50.265613 kubelet[2108]: I0905 23:59:50.265581 2108 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 5 23:59:50.265725 kubelet[2108]: I0905 23:59:50.265675 2108 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 23:59:50.274897 kubelet[2108]: I0905 23:59:50.274845 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72fda1784bb2ec485f5a3be850ea794c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"72fda1784bb2ec485f5a3be850ea794c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:59:50.274995 kubelet[2108]: I0905 23:59:50.274896 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72fda1784bb2ec485f5a3be850ea794c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"72fda1784bb2ec485f5a3be850ea794c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:59:50.274995 kubelet[2108]: I0905 23:59:50.274949 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 23:59:50.274995 kubelet[2108]: I0905 23:59:50.274985 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72fda1784bb2ec485f5a3be850ea794c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"72fda1784bb2ec485f5a3be850ea794c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:59:50.375631 kubelet[2108]: I0905 23:59:50.375571 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:50.375631 kubelet[2108]: I0905 23:59:50.375618 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:50.375788 kubelet[2108]: I0905 23:59:50.375690 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:50.375788 kubelet[2108]: I0905 23:59:50.375720 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:50.375859 kubelet[2108]: I0905 23:59:50.375802 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:59:50.487236 kubelet[2108]: E0905 23:59:50.487206 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:50.488380 kubelet[2108]: E0905 23:59:50.488348 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:50.488442 kubelet[2108]: E0905 23:59:50.488319 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:51.066234 kubelet[2108]: I0905 23:59:51.066174 2108 apiserver.go:52] "Watching apiserver" Sep 5 23:59:51.074813 kubelet[2108]: I0905 23:59:51.074778 2108 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:59:51.124471 kubelet[2108]: E0905 23:59:51.124440 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:51.124861 kubelet[2108]: E0905 23:59:51.124828 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:51.125133 kubelet[2108]: E0905 23:59:51.124929 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:51.161789 kubelet[2108]: I0905 23:59:51.161724 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.161687629 podStartE2EDuration="1.161687629s" podCreationTimestamp="2025-09-05 23:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:59:51.160902533 +0000 UTC m=+1.152340391" watchObservedRunningTime="2025-09-05 23:59:51.161687629 +0000 UTC m=+1.153125487" Sep 5 23:59:51.180275 kubelet[2108]: I0905 23:59:51.180221 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.180203553 podStartE2EDuration="1.180203553s" podCreationTimestamp="2025-09-05 23:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:59:51.169825417 +0000 UTC m=+1.161263315" watchObservedRunningTime="2025-09-05 23:59:51.180203553 +0000 UTC m=+1.171641411" Sep 5 23:59:51.190835 kubelet[2108]: I0905 23:59:51.190784 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.190767856 podStartE2EDuration="1.190767856s" podCreationTimestamp="2025-09-05 23:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:59:51.180598801 +0000 UTC m=+1.172036659" watchObservedRunningTime="2025-09-05 23:59:51.190767856 +0000 UTC m=+1.182205714" Sep 5 23:59:52.125553 
kubelet[2108]: E0905 23:59:52.125509 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:52.126065 kubelet[2108]: E0905 23:59:52.126044 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:53.127026 kubelet[2108]: E0905 23:59:53.126979 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:55.582045 kubelet[2108]: I0905 23:59:55.582015 2108 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:59:55.582629 env[1322]: time="2025-09-05T23:59:55.582589834Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 23:59:55.583059 kubelet[2108]: I0905 23:59:55.583038 2108 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:59:56.619463 kubelet[2108]: I0905 23:59:56.619393 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b13aab4-8f8c-4162-aa03-ae0c08a66862-kube-proxy\") pod \"kube-proxy-k29h6\" (UID: \"5b13aab4-8f8c-4162-aa03-ae0c08a66862\") " pod="kube-system/kube-proxy-k29h6" Sep 5 23:59:56.619463 kubelet[2108]: I0905 23:59:56.619430 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b13aab4-8f8c-4162-aa03-ae0c08a66862-xtables-lock\") pod \"kube-proxy-k29h6\" (UID: \"5b13aab4-8f8c-4162-aa03-ae0c08a66862\") " pod="kube-system/kube-proxy-k29h6" Sep 5 23:59:56.619463 kubelet[2108]: I0905 23:59:56.619448 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkx49\" (UniqueName: \"kubernetes.io/projected/5b13aab4-8f8c-4162-aa03-ae0c08a66862-kube-api-access-jkx49\") pod \"kube-proxy-k29h6\" (UID: \"5b13aab4-8f8c-4162-aa03-ae0c08a66862\") " pod="kube-system/kube-proxy-k29h6" Sep 5 23:59:56.619463 kubelet[2108]: I0905 23:59:56.619471 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b13aab4-8f8c-4162-aa03-ae0c08a66862-lib-modules\") pod \"kube-proxy-k29h6\" (UID: \"5b13aab4-8f8c-4162-aa03-ae0c08a66862\") " pod="kube-system/kube-proxy-k29h6" Sep 5 23:59:56.719842 kubelet[2108]: I0905 23:59:56.719800 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwnm\" (UniqueName: \"kubernetes.io/projected/3da01a03-3f93-46cc-97a0-0e304ecc8224-kube-api-access-spwnm\") pod 
\"tigera-operator-58fc44c59b-9d95j\" (UID: \"3da01a03-3f93-46cc-97a0-0e304ecc8224\") " pod="tigera-operator/tigera-operator-58fc44c59b-9d95j" Sep 5 23:59:56.720076 kubelet[2108]: I0905 23:59:56.720058 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3da01a03-3f93-46cc-97a0-0e304ecc8224-var-lib-calico\") pod \"tigera-operator-58fc44c59b-9d95j\" (UID: \"3da01a03-3f93-46cc-97a0-0e304ecc8224\") " pod="tigera-operator/tigera-operator-58fc44c59b-9d95j" Sep 5 23:59:56.727792 kubelet[2108]: I0905 23:59:56.727761 2108 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 5 23:59:56.846872 kubelet[2108]: E0905 23:59:56.846843 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:56.847399 env[1322]: time="2025-09-05T23:59:56.847359864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k29h6,Uid:5b13aab4-8f8c-4162-aa03-ae0c08a66862,Namespace:kube-system,Attempt:0,}" Sep 5 23:59:56.861442 env[1322]: time="2025-09-05T23:59:56.861091720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:59:56.861442 env[1322]: time="2025-09-05T23:59:56.861299952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:59:56.861442 env[1322]: time="2025-09-05T23:59:56.861313711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:59:56.861610 env[1322]: time="2025-09-05T23:59:56.861475505Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3c5929864f255166937c7f5b490d1521f958e7380a357d6c6e621b48367cbfa pid=2168 runtime=io.containerd.runc.v2 Sep 5 23:59:56.896656 env[1322]: time="2025-09-05T23:59:56.896150453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k29h6,Uid:5b13aab4-8f8c-4162-aa03-ae0c08a66862,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3c5929864f255166937c7f5b490d1521f958e7380a357d6c6e621b48367cbfa\"" Sep 5 23:59:56.896775 kubelet[2108]: E0905 23:59:56.896751 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:56.900334 env[1322]: time="2025-09-05T23:59:56.900291849Z" level=info msg="CreateContainer within sandbox \"d3c5929864f255166937c7f5b490d1521f958e7380a357d6c6e621b48367cbfa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:59:56.912261 env[1322]: time="2025-09-05T23:59:56.912221857Z" level=info msg="CreateContainer within sandbox \"d3c5929864f255166937c7f5b490d1521f958e7380a357d6c6e621b48367cbfa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8dd7ad0a3682b81124dfabb193d38cd87ebcffaf9614ffa0c55d93ee31bd11b6\"" Sep 5 23:59:56.913996 env[1322]: time="2025-09-05T23:59:56.913940549Z" level=info msg="StartContainer for \"8dd7ad0a3682b81124dfabb193d38cd87ebcffaf9614ffa0c55d93ee31bd11b6\"" Sep 5 23:59:56.963453 env[1322]: time="2025-09-05T23:59:56.963397672Z" level=info msg="StartContainer for \"8dd7ad0a3682b81124dfabb193d38cd87ebcffaf9614ffa0c55d93ee31bd11b6\" returns successfully" Sep 5 23:59:56.972412 env[1322]: time="2025-09-05T23:59:56.972313719Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-58fc44c59b-9d95j,Uid:3da01a03-3f93-46cc-97a0-0e304ecc8224,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:59:56.992595 env[1322]: time="2025-09-05T23:59:56.992268570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:59:56.992595 env[1322]: time="2025-09-05T23:59:56.992308768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:59:56.992595 env[1322]: time="2025-09-05T23:59:56.992319048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:59:56.992595 env[1322]: time="2025-09-05T23:59:56.992455162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8313011946b97a323cfcd01a816bdfc502ea1e54b13432681582509c1c691168 pid=2242 runtime=io.containerd.runc.v2 Sep 5 23:59:57.044253 env[1322]: time="2025-09-05T23:59:57.044211769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-9d95j,Uid:3da01a03-3f93-46cc-97a0-0e304ecc8224,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8313011946b97a323cfcd01a816bdfc502ea1e54b13432681582509c1c691168\"" Sep 5 23:59:57.047700 env[1322]: time="2025-09-05T23:59:57.047667960Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 23:59:57.102564 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 5 23:59:57.102653 kernel: audit: type=1325 audit(1757116797.096:216): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.102692 kernel: audit: type=1300 audit(1757116797.096:216): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc585850 a2=0 a3=1 items=0 ppid=2218 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.096000 audit[2308]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.096000 audit[2308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc585850 a2=0 a3=1 items=0 ppid=2218 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 5 23:59:57.104412 kernel: audit: type=1327 audit(1757116797.096:216): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 5 23:59:57.104485 kernel: audit: type=1325 audit(1757116797.098:217): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.098000 audit[2311]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.098000 audit[2311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9a54290 a2=0 a3=1 items=0 ppid=2218 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.108926 kernel: audit: type=1300 audit(1757116797.098:217): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9a54290 a2=0 a3=1 items=0 ppid=2218 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.108970 kernel: audit: type=1327 audit(1757116797.098:217): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 5 23:59:57.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 5 23:59:57.110412 kernel: audit: type=1325 audit(1757116797.099:218): table=filter:40 family=10 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.099000 audit[2312]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.099000 audit[2312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff526dcf0 a2=0 a3=1 items=0 ppid=2218 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.114739 kernel: audit: type=1300 audit(1757116797.099:218): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff526dcf0 a2=0 a3=1 items=0 ppid=2218 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.114786 kernel: audit: type=1327 audit(1757116797.099:218): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 5 23:59:57.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 5 23:59:57.102000 audit[2309]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2309 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.117600 kernel: audit: type=1325 audit(1757116797.102:219): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.102000 audit[2309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff103410 a2=0 a3=1 items=0 ppid=2218 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 5 23:59:57.104000 audit[2313]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.104000 audit[2313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5eaac80 a2=0 a3=1 items=0 ppid=2218 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 5 23:59:57.105000 audit[2314]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.105000 audit[2314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4b004d0 a2=0 a3=1 items=0 ppid=2218 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.105000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 5 23:59:57.135469 kubelet[2108]: E0905 23:59:57.135443 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:57.143989 kubelet[2108]: I0905 23:59:57.143928 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k29h6" podStartSLOduration=1.14391016 podStartE2EDuration="1.14391016s" podCreationTimestamp="2025-09-05 23:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:59:57.143856042 +0000 UTC m=+7.135293900" watchObservedRunningTime="2025-09-05 23:59:57.14391016 +0000 UTC m=+7.135347978" Sep 5 23:59:57.198000 audit[2315]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.198000 audit[2315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd6028560 a2=0 a3=1 items=0 ppid=2218 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 5 23:59:57.201000 audit[2317]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.201000 audit[2317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdbb02c70 a2=0 a3=1 items=0 ppid=2218 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 5 23:59:57.204000 audit[2320]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.204000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcb4ee280 a2=0 a3=1 items=0 ppid=2218 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 5 23:59:57.205000 audit[2321]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.205000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5a545a0 a2=0 a3=1 items=0 ppid=2218 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 5 23:59:57.207000 audit[2323]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2323 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.207000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff947de40 a2=0 a3=1 items=0 ppid=2218 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 5 23:59:57.208000 audit[2324]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.208000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9355ff0 a2=0 a3=1 items=0 ppid=2218 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 5 23:59:57.210000 audit[2326]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.210000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe264f3d0 a2=0 a3=1 items=0 ppid=2218 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.210000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 5 23:59:57.212000 audit[2329]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.212000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcf580990 a2=0 a3=1 items=0 ppid=2218 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 5 23:59:57.213000 audit[2330]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.213000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe49af6e0 a2=0 a3=1 items=0 ppid=2218 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 5 23:59:57.215000 audit[2332]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.215000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 
a1=ffffd8f307e0 a2=0 a3=1 items=0 ppid=2218 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 5 23:59:57.216000 audit[2333]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.216000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd661d3c0 a2=0 a3=1 items=0 ppid=2218 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 5 23:59:57.218000 audit[2335]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.218000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffde330eb0 a2=0 a3=1 items=0 ppid=2218 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.218000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 
5 23:59:57.222000 audit[2338]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.222000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe565f570 a2=0 a3=1 items=0 ppid=2218 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 5 23:59:57.225000 audit[2341]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.225000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffccaec550 a2=0 a3=1 items=0 ppid=2218 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 5 23:59:57.226000 audit[2342]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.226000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff3bab280 a2=0 a3=1 items=0 ppid=2218 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 5 23:59:57.228000 audit[2344]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.228000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff686afa0 a2=0 a3=1 items=0 ppid=2218 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 5 23:59:57.230000 audit[2347]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.230000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc0e70320 a2=0 a3=1 items=0 ppid=2218 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 5 23:59:57.231000 audit[2348]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.231000 
audit[2348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9f013a0 a2=0 a3=1 items=0 ppid=2218 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 5 23:59:57.234000 audit[2350]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 5 23:59:57.234000 audit[2350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd9436500 a2=0 a3=1 items=0 ppid=2218 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 5 23:59:57.253000 audit[2356]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 5 23:59:57.253000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff38b4500 a2=0 a3=1 items=0 ppid=2218 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 5 23:59:57.263000 audit[2356]: NETFILTER_CFG table=nat:64 
family=2 entries=14 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 5 23:59:57.263000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff38b4500 a2=0 a3=1 items=0 ppid=2218 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 5 23:59:57.264000 audit[2361]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.264000 audit[2361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc8e648a0 a2=0 a3=1 items=0 ppid=2218 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 5 23:59:57.266000 audit[2363]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.266000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdfac91f0 a2=0 a3=1 items=0 ppid=2218 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.266000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 5 23:59:57.270000 audit[2366]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.270000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff52914a0 a2=0 a3=1 items=0 ppid=2218 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 5 23:59:57.271000 audit[2367]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.271000 audit[2367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4d7a7a0 a2=0 a3=1 items=0 ppid=2218 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 5 23:59:57.273000 audit[2369]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.273000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=ffffcd0598c0 a2=0 a3=1 items=0 ppid=2218 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 5 23:59:57.274000 audit[2370]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.274000 audit[2370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb372c60 a2=0 a3=1 items=0 ppid=2218 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 5 23:59:57.276000 audit[2372]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.276000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe47176e0 a2=0 a3=1 items=0 ppid=2218 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.276000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 5 23:59:57.279000 audit[2375]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.279000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe2d7b410 a2=0 a3=1 items=0 ppid=2218 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 5 23:59:57.280000 audit[2376]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.280000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2b1f120 a2=0 a3=1 items=0 ppid=2218 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 5 23:59:57.283000 audit[2378]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.283000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=ffffe36b6f10 a2=0 a3=1 items=0 ppid=2218 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 5 23:59:57.284000 audit[2379]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.284000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddf8cdd0 a2=0 a3=1 items=0 ppid=2218 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 5 23:59:57.287000 audit[2381]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.287000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefb82c00 a2=0 a3=1 items=0 ppid=2218 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.287000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 5 23:59:57.290000 audit[2384]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.290000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9cd0310 a2=0 a3=1 items=0 ppid=2218 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 5 23:59:57.293000 audit[2387]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.293000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4fc24f0 a2=0 a3=1 items=0 ppid=2218 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 5 23:59:57.294000 audit[2388]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.294000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9c2d4c0 a2=0 a3=1 items=0 ppid=2218 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 5 23:59:57.296000 audit[2390]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.296000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe5422f20 a2=0 a3=1 items=0 ppid=2218 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 5 23:59:57.299000 audit[2393]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.299000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe365b620 a2=0 a3=1 items=0 ppid=2218 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.299000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 5 23:59:57.300000 audit[2394]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.300000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd8984b0 a2=0 a3=1 items=0 ppid=2218 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.300000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 5 23:59:57.302000 audit[2396]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.302000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd3c08620 a2=0 a3=1 items=0 ppid=2218 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 5 23:59:57.303000 audit[2397]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.303000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3b449b0 a2=0 a3=1 items=0 ppid=2218 pid=2397 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 5 23:59:57.305000 audit[2399]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.305000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffaeefe50 a2=0 a3=1 items=0 ppid=2218 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 5 23:59:57.309000 audit[2402]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 5 23:59:57.309000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe08748f0 a2=0 a3=1 items=0 ppid=2218 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 5 23:59:57.311000 audit[2404]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 5 23:59:57.311000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd89ce1b0 a2=0 
a3=1 items=0 ppid=2218 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.311000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 5 23:59:57.312000 audit[2404]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 5 23:59:57.312000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd89ce1b0 a2=0 a3=1 items=0 ppid=2218 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 5 23:59:57.312000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 5 23:59:57.387470 kubelet[2108]: E0905 23:59:57.387399 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:58.135940 kubelet[2108]: E0905 23:59:58.135906 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:59:58.204760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443043631.mount: Deactivated successfully. 
Sep 5 23:59:59.064925 env[1322]: time="2025-09-05T23:59:59.064883827Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:59.066380 env[1322]: time="2025-09-05T23:59:59.066337698Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:59.067766 env[1322]: time="2025-09-05T23:59:59.067729332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:59.069042 env[1322]: time="2025-09-05T23:59:59.069011809Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 5 23:59:59.069764 env[1322]: time="2025-09-05T23:59:59.069734825Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 23:59:59.071757 env[1322]: time="2025-09-05T23:59:59.071729638Z" level=info msg="CreateContainer within sandbox \"8313011946b97a323cfcd01a816bdfc502ea1e54b13432681582509c1c691168\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 23:59:59.082446 env[1322]: time="2025-09-05T23:59:59.082410960Z" level=info msg="CreateContainer within sandbox \"8313011946b97a323cfcd01a816bdfc502ea1e54b13432681582509c1c691168\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"57eb5db83ddf2513cdad457260ac48cd4a1c1707ec4e8e563282e613d7056ef4\"" Sep 5 23:59:59.083131 env[1322]: time="2025-09-05T23:59:59.083092138Z" level=info msg="StartContainer for 
\"57eb5db83ddf2513cdad457260ac48cd4a1c1707ec4e8e563282e613d7056ef4\"" Sep 5 23:59:59.138475 env[1322]: time="2025-09-05T23:59:59.138430205Z" level=info msg="StartContainer for \"57eb5db83ddf2513cdad457260ac48cd4a1c1707ec4e8e563282e613d7056ef4\" returns successfully" Sep 6 00:00:00.151594 kubelet[2108]: I0906 00:00:00.151437 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-9d95j" podStartSLOduration=2.126542964 podStartE2EDuration="4.151420007s" podCreationTimestamp="2025-09-05 23:59:56 +0000 UTC" firstStartedPulling="2025-09-05 23:59:57.045570518 +0000 UTC m=+7.037008376" lastFinishedPulling="2025-09-05 23:59:59.070447561 +0000 UTC m=+9.061885419" observedRunningTime="2025-09-06 00:00:00.150903223 +0000 UTC m=+10.142341081" watchObservedRunningTime="2025-09-06 00:00:00.151420007 +0000 UTC m=+10.142857865" Sep 6 00:00:01.543063 kubelet[2108]: E0906 00:00:01.543012 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:01.720574 kubelet[2108]: E0906 00:00:01.720504 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:02.146393 kubelet[2108]: E0906 00:00:02.146350 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:04.460827 sudo[1484]: pam_unix(sudo:session): session closed for user root Sep 6 00:00:04.459000 audit[1484]: USER_END pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 6 00:00:04.461600 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 6 00:00:04.461671 kernel: audit: type=1106 audit(1757116804.459:267): pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:00:04.459000 audit[1484]: CRED_DISP pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:00:04.466809 kernel: audit: type=1104 audit(1757116804.459:268): pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:00:04.469686 sshd[1478]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:04.469000 audit[1478]: USER_END pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:04.472756 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:38592.service: Deactivated successfully. 
Sep 6 00:00:04.479678 kernel: audit: type=1106 audit(1757116804.469:269): pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:04.479783 kernel: audit: type=1104 audit(1757116804.469:270): pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:04.469000 audit[1478]: CRED_DISP pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:04.476562 systemd-logind[1310]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:00:04.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.34:22-10.0.0.1:38592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:04.478197 systemd[1]: Started logrotate.service. Sep 6 00:00:04.478622 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:00:04.484871 kernel: audit: type=1131 audit(1757116804.471:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.34:22-10.0.0.1:38592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:04.484992 kernel: audit: type=1130 audit(1757116804.476:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:00:04.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:04.485284 systemd-logind[1310]: Removed session 7. Sep 6 00:00:04.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:04.493496 systemd[1]: logrotate.service: Deactivated successfully. Sep 6 00:00:04.496688 kernel: audit: type=1131 audit(1757116804.492:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:05.844000 audit[2499]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:05.844000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe28e6390 a2=0 a3=1 items=0 ppid=2218 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:05.851392 kernel: audit: type=1325 audit(1757116805.844:274): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:05.851532 kernel: audit: type=1300 audit(1757116805.844:274): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe28e6390 a2=0 a3=1 items=0 ppid=2218 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:00:05.851571 kernel: audit: type=1327 audit(1757116805.844:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:05.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:05.855000 audit[2499]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:05.855000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe28e6390 a2=0 a3=1 items=0 ppid=2218 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:05.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:05.923000 audit[2501]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:05.923000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe1265cf0 a2=0 a3=1 items=0 ppid=2218 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:05.923000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:05.930000 audit[2501]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:05.930000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1265cf0 a2=0 a3=1 items=0 ppid=2218 
pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:05.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:07.008648 update_engine[1312]: I0906 00:00:07.008596 1312 update_attempter.cc:509] Updating boot flags... Sep 6 00:00:08.976000 audit[2518]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:08.976000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff8785dd0 a2=0 a3=1 items=0 ppid=2218 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:08.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:08.983000 audit[2518]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:08.983000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff8785dd0 a2=0 a3=1 items=0 ppid=2218 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:08.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:09.008000 audit[2520]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Sep 6 00:00:09.008000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe71bed40 a2=0 a3=1 items=0 ppid=2218 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:09.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:09.018000 audit[2520]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:09.018000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe71bed40 a2=0 a3=1 items=0 ppid=2218 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:09.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:09.110920 kubelet[2108]: I0906 00:00:09.110869 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/95b0f53a-d839-47ca-b2a2-1a64c2a365b6-typha-certs\") pod \"calico-typha-6d469bfddd-f566g\" (UID: \"95b0f53a-d839-47ca-b2a2-1a64c2a365b6\") " pod="calico-system/calico-typha-6d469bfddd-f566g" Sep 6 00:00:09.111279 kubelet[2108]: I0906 00:00:09.110922 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nggjt\" (UniqueName: \"kubernetes.io/projected/95b0f53a-d839-47ca-b2a2-1a64c2a365b6-kube-api-access-nggjt\") pod \"calico-typha-6d469bfddd-f566g\" (UID: \"95b0f53a-d839-47ca-b2a2-1a64c2a365b6\") " 
pod="calico-system/calico-typha-6d469bfddd-f566g" Sep 6 00:00:09.111279 kubelet[2108]: I0906 00:00:09.110956 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95b0f53a-d839-47ca-b2a2-1a64c2a365b6-tigera-ca-bundle\") pod \"calico-typha-6d469bfddd-f566g\" (UID: \"95b0f53a-d839-47ca-b2a2-1a64c2a365b6\") " pod="calico-system/calico-typha-6d469bfddd-f566g" Sep 6 00:00:09.312508 kubelet[2108]: I0906 00:00:09.312376 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrwr\" (UniqueName: \"kubernetes.io/projected/2ff05a18-2c86-4900-b928-57c8a71bebf5-kube-api-access-rgrwr\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312508 kubelet[2108]: I0906 00:00:09.312445 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-policysync\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312508 kubelet[2108]: I0906 00:00:09.312486 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ff05a18-2c86-4900-b928-57c8a71bebf5-tigera-ca-bundle\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312508 kubelet[2108]: I0906 00:00:09.312505 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-var-run-calico\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " 
pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312739 kubelet[2108]: I0906 00:00:09.312525 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-cni-bin-dir\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312739 kubelet[2108]: I0906 00:00:09.312554 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-cni-net-dir\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312739 kubelet[2108]: I0906 00:00:09.312584 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-lib-modules\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312739 kubelet[2108]: I0906 00:00:09.312615 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-var-lib-calico\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312739 kubelet[2108]: I0906 00:00:09.312632 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-flexvol-driver-host\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312849 kubelet[2108]: 
I0906 00:00:09.312649 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-xtables-lock\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312849 kubelet[2108]: I0906 00:00:09.312686 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2ff05a18-2c86-4900-b928-57c8a71bebf5-cni-log-dir\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.312849 kubelet[2108]: I0906 00:00:09.312702 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2ff05a18-2c86-4900-b928-57c8a71bebf5-node-certs\") pod \"calico-node-2cqgr\" (UID: \"2ff05a18-2c86-4900-b928-57c8a71bebf5\") " pod="calico-system/calico-node-2cqgr" Sep 6 00:00:09.319589 kubelet[2108]: E0906 00:00:09.319556 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:09.320479 env[1322]: time="2025-09-06T00:00:09.320433948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d469bfddd-f566g,Uid:95b0f53a-d839-47ca-b2a2-1a64c2a365b6,Namespace:calico-system,Attempt:0,}" Sep 6 00:00:09.339274 env[1322]: time="2025-09-06T00:00:09.339205374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:00:09.339407 env[1322]: time="2025-09-06T00:00:09.339253933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:00:09.339407 env[1322]: time="2025-09-06T00:00:09.339264573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:00:09.339680 env[1322]: time="2025-09-06T00:00:09.339637526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80b90ec9896958e7806cf2a7eb0c6f8159c4396bf3f384a5474fd7b88c1a4429 pid=2530 runtime=io.containerd.runc.v2 Sep 6 00:00:09.416250 kubelet[2108]: E0906 00:00:09.415158 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.416250 kubelet[2108]: W0906 00:00:09.415190 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.416250 kubelet[2108]: E0906 00:00:09.415236 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.423050 kubelet[2108]: E0906 00:00:09.423003 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.423050 kubelet[2108]: W0906 00:00:09.423037 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.423050 kubelet[2108]: E0906 00:00:09.423062 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.423979 kubelet[2108]: E0906 00:00:09.423934 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.423979 kubelet[2108]: W0906 00:00:09.423964 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.424137 kubelet[2108]: E0906 00:00:09.423987 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.424786 kubelet[2108]: E0906 00:00:09.424760 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.424786 kubelet[2108]: W0906 00:00:09.424781 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.424887 kubelet[2108]: E0906 00:00:09.424875 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.425001 kubelet[2108]: E0906 00:00:09.424984 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.425001 kubelet[2108]: W0906 00:00:09.424996 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.425066 kubelet[2108]: E0906 00:00:09.425004 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.431984 kubelet[2108]: E0906 00:00:09.428690 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.431984 kubelet[2108]: W0906 00:00:09.428717 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.431984 kubelet[2108]: E0906 00:00:09.428730 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.431984 kubelet[2108]: E0906 00:00:09.429038 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.431984 kubelet[2108]: W0906 00:00:09.429049 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.431984 kubelet[2108]: E0906 00:00:09.429059 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.475097 kubelet[2108]: E0906 00:00:09.472914 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:09.477707 env[1322]: time="2025-09-06T00:00:09.477664299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d469bfddd-f566g,Uid:95b0f53a-d839-47ca-b2a2-1a64c2a365b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"80b90ec9896958e7806cf2a7eb0c6f8159c4396bf3f384a5474fd7b88c1a4429\"" Sep 6 00:00:09.480074 kubelet[2108]: E0906 00:00:09.480037 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:09.480941 env[1322]: time="2025-09-06T00:00:09.480892555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 6 00:00:09.507149 kubelet[2108]: E0906 00:00:09.507117 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.507282 
kubelet[2108]: W0906 00:00:09.507162 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.507282 kubelet[2108]: E0906 00:00:09.507181 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.507406 kubelet[2108]: E0906 00:00:09.507377 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.507406 kubelet[2108]: W0906 00:00:09.507405 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.507467 kubelet[2108]: E0906 00:00:09.507416 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.507596 kubelet[2108]: E0906 00:00:09.507585 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.507631 kubelet[2108]: W0906 00:00:09.507597 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.507631 kubelet[2108]: E0906 00:00:09.507605 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.507770 kubelet[2108]: E0906 00:00:09.507735 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.507770 kubelet[2108]: W0906 00:00:09.507746 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.507770 kubelet[2108]: E0906 00:00:09.507754 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.507899 kubelet[2108]: E0906 00:00:09.507889 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.507928 kubelet[2108]: W0906 00:00:09.507899 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.507928 kubelet[2108]: E0906 00:00:09.507907 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.508042 kubelet[2108]: E0906 00:00:09.508033 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.508067 kubelet[2108]: W0906 00:00:09.508043 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.508067 kubelet[2108]: E0906 00:00:09.508050 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.508181 kubelet[2108]: E0906 00:00:09.508172 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.508207 kubelet[2108]: W0906 00:00:09.508182 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.508207 kubelet[2108]: E0906 00:00:09.508189 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.508399 kubelet[2108]: E0906 00:00:09.508385 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.508399 kubelet[2108]: W0906 00:00:09.508399 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.508462 kubelet[2108]: E0906 00:00:09.508409 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.508586 kubelet[2108]: E0906 00:00:09.508574 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.508625 kubelet[2108]: W0906 00:00:09.508586 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.508625 kubelet[2108]: E0906 00:00:09.508595 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.508747 kubelet[2108]: E0906 00:00:09.508734 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.508772 kubelet[2108]: W0906 00:00:09.508747 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.508772 kubelet[2108]: E0906 00:00:09.508755 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.508897 kubelet[2108]: E0906 00:00:09.508888 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.508925 kubelet[2108]: W0906 00:00:09.508898 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.508925 kubelet[2108]: E0906 00:00:09.508907 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.509051 kubelet[2108]: E0906 00:00:09.509041 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.509078 kubelet[2108]: W0906 00:00:09.509052 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.509078 kubelet[2108]: E0906 00:00:09.509059 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.509209 kubelet[2108]: E0906 00:00:09.509199 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.509235 kubelet[2108]: W0906 00:00:09.509210 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.509235 kubelet[2108]: E0906 00:00:09.509217 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.509362 kubelet[2108]: E0906 00:00:09.509352 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.509393 kubelet[2108]: W0906 00:00:09.509362 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.509393 kubelet[2108]: E0906 00:00:09.509370 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.509504 kubelet[2108]: E0906 00:00:09.509494 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.509531 kubelet[2108]: W0906 00:00:09.509512 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.509531 kubelet[2108]: E0906 00:00:09.509520 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.509673 kubelet[2108]: E0906 00:00:09.509663 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.509700 kubelet[2108]: W0906 00:00:09.509673 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.509700 kubelet[2108]: E0906 00:00:09.509681 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.509820 kubelet[2108]: E0906 00:00:09.509811 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.509848 kubelet[2108]: W0906 00:00:09.509821 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.509848 kubelet[2108]: E0906 00:00:09.509828 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.509959 kubelet[2108]: E0906 00:00:09.509950 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.509983 kubelet[2108]: W0906 00:00:09.509959 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.509983 kubelet[2108]: E0906 00:00:09.509966 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.510084 kubelet[2108]: E0906 00:00:09.510075 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.510109 kubelet[2108]: W0906 00:00:09.510084 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.510109 kubelet[2108]: E0906 00:00:09.510091 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.510214 kubelet[2108]: E0906 00:00:09.510205 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.510239 kubelet[2108]: W0906 00:00:09.510215 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.510239 kubelet[2108]: E0906 00:00:09.510223 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.514845 kubelet[2108]: E0906 00:00:09.514820 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.514845 kubelet[2108]: W0906 00:00:09.514840 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.514927 kubelet[2108]: E0906 00:00:09.514851 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.514927 kubelet[2108]: I0906 00:00:09.514875 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/725f1740-cbad-4998-8e87-ef45cb66da35-socket-dir\") pod \"csi-node-driver-7tzrz\" (UID: \"725f1740-cbad-4998-8e87-ef45cb66da35\") " pod="calico-system/csi-node-driver-7tzrz" Sep 6 00:00:09.515089 kubelet[2108]: E0906 00:00:09.515065 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.515089 kubelet[2108]: W0906 00:00:09.515078 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.515089 kubelet[2108]: E0906 00:00:09.515087 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.515171 kubelet[2108]: I0906 00:00:09.515100 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/725f1740-cbad-4998-8e87-ef45cb66da35-kubelet-dir\") pod \"csi-node-driver-7tzrz\" (UID: \"725f1740-cbad-4998-8e87-ef45cb66da35\") " pod="calico-system/csi-node-driver-7tzrz" Sep 6 00:00:09.515263 kubelet[2108]: E0906 00:00:09.515250 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.515289 kubelet[2108]: W0906 00:00:09.515262 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.515289 kubelet[2108]: E0906 00:00:09.515271 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.515337 kubelet[2108]: I0906 00:00:09.515284 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86kv\" (UniqueName: \"kubernetes.io/projected/725f1740-cbad-4998-8e87-ef45cb66da35-kube-api-access-z86kv\") pod \"csi-node-driver-7tzrz\" (UID: \"725f1740-cbad-4998-8e87-ef45cb66da35\") " pod="calico-system/csi-node-driver-7tzrz" Sep 6 00:00:09.515463 kubelet[2108]: E0906 00:00:09.515440 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.515463 kubelet[2108]: W0906 00:00:09.515453 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.515463 kubelet[2108]: E0906 00:00:09.515461 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.515546 kubelet[2108]: I0906 00:00:09.515475 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/725f1740-cbad-4998-8e87-ef45cb66da35-registration-dir\") pod \"csi-node-driver-7tzrz\" (UID: \"725f1740-cbad-4998-8e87-ef45cb66da35\") " pod="calico-system/csi-node-driver-7tzrz" Sep 6 00:00:09.515642 kubelet[2108]: E0906 00:00:09.515630 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.515668 kubelet[2108]: W0906 00:00:09.515642 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.515668 kubelet[2108]: E0906 00:00:09.515651 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.515668 kubelet[2108]: I0906 00:00:09.515663 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/725f1740-cbad-4998-8e87-ef45cb66da35-varrun\") pod \"csi-node-driver-7tzrz\" (UID: \"725f1740-cbad-4998-8e87-ef45cb66da35\") " pod="calico-system/csi-node-driver-7tzrz" Sep 6 00:00:09.515859 kubelet[2108]: E0906 00:00:09.515846 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.515859 kubelet[2108]: W0906 00:00:09.515857 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.515921 kubelet[2108]: E0906 00:00:09.515868 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.516017 kubelet[2108]: E0906 00:00:09.516006 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.516042 kubelet[2108]: W0906 00:00:09.516017 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.516042 kubelet[2108]: E0906 00:00:09.516027 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.516179 kubelet[2108]: E0906 00:00:09.516169 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.516206 kubelet[2108]: W0906 00:00:09.516180 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.516206 kubelet[2108]: E0906 00:00:09.516190 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.516321 kubelet[2108]: E0906 00:00:09.516311 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.516356 kubelet[2108]: W0906 00:00:09.516322 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.516356 kubelet[2108]: E0906 00:00:09.516341 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.516488 kubelet[2108]: E0906 00:00:09.516478 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.516516 kubelet[2108]: W0906 00:00:09.516488 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.516516 kubelet[2108]: E0906 00:00:09.516499 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.516674 kubelet[2108]: E0906 00:00:09.516663 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.516702 kubelet[2108]: W0906 00:00:09.516674 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.516725 kubelet[2108]: E0906 00:00:09.516710 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.516836 kubelet[2108]: E0906 00:00:09.516827 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.516862 kubelet[2108]: W0906 00:00:09.516836 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.516887 kubelet[2108]: E0906 00:00:09.516870 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.516987 kubelet[2108]: E0906 00:00:09.516978 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.517010 kubelet[2108]: W0906 00:00:09.516987 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.517010 kubelet[2108]: E0906 00:00:09.516997 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.517126 kubelet[2108]: E0906 00:00:09.517118 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.517153 kubelet[2108]: W0906 00:00:09.517127 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.517153 kubelet[2108]: E0906 00:00:09.517134 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.517275 kubelet[2108]: E0906 00:00:09.517266 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.517300 kubelet[2108]: W0906 00:00:09.517276 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.517300 kubelet[2108]: E0906 00:00:09.517283 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.555066 env[1322]: time="2025-09-06T00:00:09.555013280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2cqgr,Uid:2ff05a18-2c86-4900-b928-57c8a71bebf5,Namespace:calico-system,Attempt:0,}" Sep 6 00:00:09.580578 env[1322]: time="2025-09-06T00:00:09.574520372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:00:09.580578 env[1322]: time="2025-09-06T00:00:09.574573171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:00:09.580578 env[1322]: time="2025-09-06T00:00:09.574583691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:00:09.580578 env[1322]: time="2025-09-06T00:00:09.574750128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb pid=2626 runtime=io.containerd.runc.v2 Sep 6 00:00:09.620314 kubelet[2108]: E0906 00:00:09.620147 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.620314 kubelet[2108]: W0906 00:00:09.620167 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.620314 kubelet[2108]: E0906 00:00:09.620185 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.620784 kubelet[2108]: E0906 00:00:09.620652 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.620784 kubelet[2108]: W0906 00:00:09.620665 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.620784 kubelet[2108]: E0906 00:00:09.620677 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.621084 kubelet[2108]: E0906 00:00:09.620964 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.621084 kubelet[2108]: W0906 00:00:09.620975 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.621084 kubelet[2108]: E0906 00:00:09.620986 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.621394 kubelet[2108]: E0906 00:00:09.621250 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.621394 kubelet[2108]: W0906 00:00:09.621261 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.621394 kubelet[2108]: E0906 00:00:09.621271 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.621804 kubelet[2108]: E0906 00:00:09.621664 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.621804 kubelet[2108]: W0906 00:00:09.621677 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.621804 kubelet[2108]: E0906 00:00:09.621688 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.622080 kubelet[2108]: E0906 00:00:09.621968 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.622080 kubelet[2108]: W0906 00:00:09.621978 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.622080 kubelet[2108]: E0906 00:00:09.622058 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.622349 kubelet[2108]: E0906 00:00:09.622232 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.622349 kubelet[2108]: W0906 00:00:09.622242 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.622349 kubelet[2108]: E0906 00:00:09.622318 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.622617 kubelet[2108]: E0906 00:00:09.622489 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.622617 kubelet[2108]: W0906 00:00:09.622498 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.622617 kubelet[2108]: E0906 00:00:09.622588 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.622882 kubelet[2108]: E0906 00:00:09.622768 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.622882 kubelet[2108]: W0906 00:00:09.622777 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.622882 kubelet[2108]: E0906 00:00:09.622863 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.624243 kubelet[2108]: E0906 00:00:09.623024 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.624243 kubelet[2108]: W0906 00:00:09.623033 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.624243 kubelet[2108]: E0906 00:00:09.623108 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.625039 kubelet[2108]: E0906 00:00:09.624444 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.625039 kubelet[2108]: W0906 00:00:09.624457 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.625039 kubelet[2108]: E0906 00:00:09.624479 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.625039 kubelet[2108]: E0906 00:00:09.624778 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.625039 kubelet[2108]: W0906 00:00:09.624795 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.625039 kubelet[2108]: E0906 00:00:09.624878 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.628144 kubelet[2108]: E0906 00:00:09.628049 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.628144 kubelet[2108]: W0906 00:00:09.628061 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.628144 kubelet[2108]: E0906 00:00:09.628132 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.629792 kubelet[2108]: E0906 00:00:09.629659 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.629792 kubelet[2108]: W0906 00:00:09.629678 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.629792 kubelet[2108]: E0906 00:00:09.629772 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.630110 kubelet[2108]: E0906 00:00:09.630005 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.630110 kubelet[2108]: W0906 00:00:09.630015 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.630110 kubelet[2108]: E0906 00:00:09.630092 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.630370 kubelet[2108]: E0906 00:00:09.630340 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.630467 kubelet[2108]: W0906 00:00:09.630453 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.630624 kubelet[2108]: E0906 00:00:09.630613 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.630865 kubelet[2108]: E0906 00:00:09.630853 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.630940 kubelet[2108]: W0906 00:00:09.630927 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.631035 kubelet[2108]: E0906 00:00:09.631024 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.631252 kubelet[2108]: E0906 00:00:09.631240 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.631347 kubelet[2108]: W0906 00:00:09.631332 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.631455 kubelet[2108]: E0906 00:00:09.631444 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.631758 kubelet[2108]: E0906 00:00:09.631736 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.631855 kubelet[2108]: W0906 00:00:09.631842 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.632036 kubelet[2108]: E0906 00:00:09.632010 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.632287 kubelet[2108]: E0906 00:00:09.632263 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.632425 kubelet[2108]: W0906 00:00:09.632410 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.632640 kubelet[2108]: E0906 00:00:09.632625 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.633178 kubelet[2108]: E0906 00:00:09.633143 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.633282 kubelet[2108]: W0906 00:00:09.633268 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.633465 kubelet[2108]: E0906 00:00:09.633440 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.635737 kubelet[2108]: E0906 00:00:09.635721 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.635848 kubelet[2108]: W0906 00:00:09.635834 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.636017 kubelet[2108]: E0906 00:00:09.636000 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.636206 kubelet[2108]: E0906 00:00:09.636194 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.636309 kubelet[2108]: W0906 00:00:09.636295 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.636517 kubelet[2108]: E0906 00:00:09.636503 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.636919 kubelet[2108]: E0906 00:00:09.636903 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.637013 kubelet[2108]: W0906 00:00:09.636999 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.637180 kubelet[2108]: E0906 00:00:09.637139 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.637298 kubelet[2108]: E0906 00:00:09.637286 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.637417 kubelet[2108]: W0906 00:00:09.637403 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.637491 kubelet[2108]: E0906 00:00:09.637479 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:09.650584 kubelet[2108]: E0906 00:00:09.649878 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:09.650584 kubelet[2108]: W0906 00:00:09.649897 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:09.650584 kubelet[2108]: E0906 00:00:09.649915 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:09.656031 env[1322]: time="2025-09-06T00:00:09.655996511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2cqgr,Uid:2ff05a18-2c86-4900-b928-57c8a71bebf5,Namespace:calico-system,Attempt:0,} returns sandbox id \"3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb\"" Sep 6 00:00:10.043000 audit[2687]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:10.046960 kernel: kauditd_printk_skb: 21 callbacks suppressed Sep 6 00:00:10.047045 kernel: audit: type=1325 audit(1757116810.043:282): table=filter:97 family=2 entries=20 op=nft_register_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:10.047071 kernel: audit: type=1300 audit(1757116810.043:282): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe6eb4b90 a2=0 a3=1 items=0 ppid=2218 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:10.043000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe6eb4b90 a2=0 a3=1 items=0 ppid=2218 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:10.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:10.051869 kernel: audit: type=1327 audit(1757116810.043:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:10.052000 audit[2687]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2687 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:10.052000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6eb4b90 a2=0 a3=1 items=0 ppid=2218 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:10.059803 kernel: audit: type=1325 audit(1757116810.052:283): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:10.059868 kernel: audit: type=1300 audit(1757116810.052:283): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6eb4b90 a2=0 a3=1 items=0 ppid=2218 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:10.059889 kernel: audit: type=1327 audit(1757116810.052:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:10.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:11.077816 kubelet[2108]: E0906 00:00:11.077768 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:13.077336 kubelet[2108]: E0906 00:00:13.077251 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:15.077273 kubelet[2108]: E0906 00:00:15.077225 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:17.077763 kubelet[2108]: E0906 00:00:17.077695 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:19.077701 kubelet[2108]: E0906 00:00:19.077647 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:21.078128 kubelet[2108]: E0906 00:00:21.078064 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:21.778288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996486466.mount: Deactivated successfully. 
Sep 6 00:00:22.376686 env[1322]: time="2025-09-06T00:00:22.376638754Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:22.378751 env[1322]: time="2025-09-06T00:00:22.378713050Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:22.380299 env[1322]: time="2025-09-06T00:00:22.380271313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:22.382169 env[1322]: time="2025-09-06T00:00:22.382140931Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:22.382911 env[1322]: time="2025-09-06T00:00:22.382884923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 6 00:00:22.386818 env[1322]: time="2025-09-06T00:00:22.386754959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 6 00:00:22.400724 env[1322]: time="2025-09-06T00:00:22.400350444Z" level=info msg="CreateContainer within sandbox \"80b90ec9896958e7806cf2a7eb0c6f8159c4396bf3f384a5474fd7b88c1a4429\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 6 00:00:22.410765 env[1322]: time="2025-09-06T00:00:22.410708566Z" level=info msg="CreateContainer within sandbox \"80b90ec9896958e7806cf2a7eb0c6f8159c4396bf3f384a5474fd7b88c1a4429\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"f70859bd78c5d16477351272b3b41618cabce60338f051fa92450e19e39a52df\"" Sep 6 00:00:22.411554 env[1322]: time="2025-09-06T00:00:22.411514197Z" level=info msg="StartContainer for \"f70859bd78c5d16477351272b3b41618cabce60338f051fa92450e19e39a52df\"" Sep 6 00:00:22.501252 env[1322]: time="2025-09-06T00:00:22.501175735Z" level=info msg="StartContainer for \"f70859bd78c5d16477351272b3b41618cabce60338f051fa92450e19e39a52df\" returns successfully" Sep 6 00:00:23.079048 kubelet[2108]: E0906 00:00:23.078995 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:23.191707 kubelet[2108]: E0906 00:00:23.191670 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:23.212289 kubelet[2108]: E0906 00:00:23.212254 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.212429 kubelet[2108]: W0906 00:00:23.212293 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.212429 kubelet[2108]: E0906 00:00:23.212313 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.212654 kubelet[2108]: E0906 00:00:23.212638 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.212713 kubelet[2108]: W0906 00:00:23.212658 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.212713 kubelet[2108]: E0906 00:00:23.212681 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.212892 kubelet[2108]: E0906 00:00:23.212877 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.212934 kubelet[2108]: W0906 00:00:23.212925 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.212967 kubelet[2108]: E0906 00:00:23.212937 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.218121 kubelet[2108]: E0906 00:00:23.218100 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.218121 kubelet[2108]: W0906 00:00:23.218120 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.218218 kubelet[2108]: E0906 00:00:23.218132 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.218376 kubelet[2108]: E0906 00:00:23.218358 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.218376 kubelet[2108]: W0906 00:00:23.218374 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.218447 kubelet[2108]: E0906 00:00:23.218384 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.219301 kubelet[2108]: E0906 00:00:23.218906 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.219301 kubelet[2108]: W0906 00:00:23.218922 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.219301 kubelet[2108]: E0906 00:00:23.218935 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.219301 kubelet[2108]: E0906 00:00:23.219127 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.219301 kubelet[2108]: W0906 00:00:23.219137 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.219301 kubelet[2108]: E0906 00:00:23.219149 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.219301 kubelet[2108]: E0906 00:00:23.219308 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.219547 kubelet[2108]: W0906 00:00:23.219316 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.219547 kubelet[2108]: E0906 00:00:23.219325 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.219547 kubelet[2108]: E0906 00:00:23.219483 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.219547 kubelet[2108]: W0906 00:00:23.219491 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.219547 kubelet[2108]: E0906 00:00:23.219499 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.219681 kubelet[2108]: E0906 00:00:23.219662 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.219681 kubelet[2108]: W0906 00:00:23.219677 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.219734 kubelet[2108]: E0906 00:00:23.219690 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.220845 kubelet[2108]: E0906 00:00:23.219850 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.220845 kubelet[2108]: W0906 00:00:23.219864 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.220845 kubelet[2108]: E0906 00:00:23.219873 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.220845 kubelet[2108]: E0906 00:00:23.220031 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.220845 kubelet[2108]: W0906 00:00:23.220040 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.220845 kubelet[2108]: E0906 00:00:23.220049 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.220845 kubelet[2108]: E0906 00:00:23.220219 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.220845 kubelet[2108]: W0906 00:00:23.220232 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.220845 kubelet[2108]: E0906 00:00:23.220241 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.220845 kubelet[2108]: E0906 00:00:23.220458 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.221133 kubelet[2108]: W0906 00:00:23.220469 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.221133 kubelet[2108]: E0906 00:00:23.220478 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.221133 kubelet[2108]: E0906 00:00:23.220781 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.221133 kubelet[2108]: W0906 00:00:23.220801 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.221133 kubelet[2108]: E0906 00:00:23.220811 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.221133 kubelet[2108]: E0906 00:00:23.221110 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.221133 kubelet[2108]: W0906 00:00:23.221122 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.221133 kubelet[2108]: E0906 00:00:23.221132 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.221330 kubelet[2108]: E0906 00:00:23.221320 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.221356 kubelet[2108]: W0906 00:00:23.221331 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.221356 kubelet[2108]: E0906 00:00:23.221340 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.221663 kubelet[2108]: E0906 00:00:23.221498 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.222084 kubelet[2108]: W0906 00:00:23.222049 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.222123 kubelet[2108]: E0906 00:00:23.222088 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.222473 kubelet[2108]: E0906 00:00:23.222452 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.222473 kubelet[2108]: W0906 00:00:23.222470 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.222577 kubelet[2108]: E0906 00:00:23.222480 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.224045 kubelet[2108]: E0906 00:00:23.224024 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.224045 kubelet[2108]: W0906 00:00:23.224042 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.224159 kubelet[2108]: E0906 00:00:23.224055 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.224257 kubelet[2108]: E0906 00:00:23.224234 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.224299 kubelet[2108]: W0906 00:00:23.224287 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.224365 kubelet[2108]: E0906 00:00:23.224339 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.224446 kubelet[2108]: E0906 00:00:23.224436 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.224479 kubelet[2108]: W0906 00:00:23.224446 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.224504 kubelet[2108]: E0906 00:00:23.224481 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.225395 kubelet[2108]: E0906 00:00:23.225359 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.225395 kubelet[2108]: W0906 00:00:23.225379 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.225395 kubelet[2108]: I0906 00:00:23.225343 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d469bfddd-f566g" podStartSLOduration=2.319710449 podStartE2EDuration="15.225331653s" podCreationTimestamp="2025-09-06 00:00:08 +0000 UTC" firstStartedPulling="2025-09-06 00:00:09.480616721 +0000 UTC m=+19.472054579" lastFinishedPulling="2025-09-06 00:00:22.386237965 +0000 UTC m=+32.377675783" observedRunningTime="2025-09-06 00:00:23.223616432 +0000 UTC m=+33.215054290" watchObservedRunningTime="2025-09-06 00:00:23.225331653 +0000 UTC m=+33.216769511" Sep 6 00:00:23.225598 kubelet[2108]: E0906 00:00:23.225408 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.225699 kubelet[2108]: E0906 00:00:23.225682 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.225699 kubelet[2108]: W0906 00:00:23.225697 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.225754 kubelet[2108]: E0906 00:00:23.225712 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.226214 kubelet[2108]: E0906 00:00:23.226186 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.226214 kubelet[2108]: W0906 00:00:23.226202 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.226273 kubelet[2108]: E0906 00:00:23.226234 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.227101 kubelet[2108]: E0906 00:00:23.227077 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.227101 kubelet[2108]: W0906 00:00:23.227093 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.227160 kubelet[2108]: E0906 00:00:23.227110 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.227300 kubelet[2108]: E0906 00:00:23.227289 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.227324 kubelet[2108]: W0906 00:00:23.227299 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.227352 kubelet[2108]: E0906 00:00:23.227339 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.227460 kubelet[2108]: E0906 00:00:23.227449 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.227484 kubelet[2108]: W0906 00:00:23.227460 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.227510 kubelet[2108]: E0906 00:00:23.227492 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.227618 kubelet[2108]: E0906 00:00:23.227606 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.227646 kubelet[2108]: W0906 00:00:23.227619 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.227646 kubelet[2108]: E0906 00:00:23.227636 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.228252 kubelet[2108]: E0906 00:00:23.228228 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.228252 kubelet[2108]: W0906 00:00:23.228243 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.228307 kubelet[2108]: E0906 00:00:23.228257 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.228723 kubelet[2108]: E0906 00:00:23.228711 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.228748 kubelet[2108]: W0906 00:00:23.228725 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.228748 kubelet[2108]: E0906 00:00:23.228739 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:00:23.229029 kubelet[2108]: E0906 00:00:23.229017 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.229029 kubelet[2108]: W0906 00:00:23.229029 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.229087 kubelet[2108]: E0906 00:00:23.229038 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:00:23.229206 kubelet[2108]: E0906 00:00:23.229196 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:00:23.229231 kubelet[2108]: W0906 00:00:23.229206 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:00:23.229231 kubelet[2108]: E0906 00:00:23.229216 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 6 00:00:24.192493 kubelet[2108]: I0906 00:00:24.192464 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 6 00:00:24.193424 kubelet[2108]: E0906 00:00:24.193403 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:00:24.227677 kubelet[2108]: E0906 00:00:24.227650 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.227845 kubelet[2108]: W0906 00:00:24.227826 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.227921 kubelet[2108]: E0906 00:00:24.227907 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.228712 kubelet[2108]: E0906 00:00:24.228693 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.228813 kubelet[2108]: W0906 00:00:24.228799 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.228874 kubelet[2108]: E0906 00:00:24.228862 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.229562 kubelet[2108]: E0906 00:00:24.229547 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.229674 kubelet[2108]: W0906 00:00:24.229658 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.229736 kubelet[2108]: E0906 00:00:24.229725 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.230182 kubelet[2108]: E0906 00:00:24.230166 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.230274 kubelet[2108]: W0906 00:00:24.230260 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.230334 kubelet[2108]: E0906 00:00:24.230324 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.231009 kubelet[2108]: E0906 00:00:24.230994 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.231111 kubelet[2108]: W0906 00:00:24.231097 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.231173 kubelet[2108]: E0906 00:00:24.231161 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.232141 kubelet[2108]: E0906 00:00:24.232126 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.232236 kubelet[2108]: W0906 00:00:24.232222 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.232320 kubelet[2108]: E0906 00:00:24.232306 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.232582 kubelet[2108]: E0906 00:00:24.232569 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.232676 kubelet[2108]: W0906 00:00:24.232661 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.232742 kubelet[2108]: E0906 00:00:24.232732 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.233098 kubelet[2108]: E0906 00:00:24.233083 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.233182 kubelet[2108]: W0906 00:00:24.233168 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.233242 kubelet[2108]: E0906 00:00:24.233232 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.234480 kubelet[2108]: E0906 00:00:24.234465 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.234591 kubelet[2108]: W0906 00:00:24.234576 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.234671 kubelet[2108]: E0906 00:00:24.234641 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.234901 kubelet[2108]: E0906 00:00:24.234887 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.234990 kubelet[2108]: W0906 00:00:24.234977 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.235061 kubelet[2108]: E0906 00:00:24.235038 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.235663 kubelet[2108]: E0906 00:00:24.235649 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.235753 kubelet[2108]: W0906 00:00:24.235738 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.235812 kubelet[2108]: E0906 00:00:24.235801 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.239160 kubelet[2108]: E0906 00:00:24.239144 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.239254 kubelet[2108]: W0906 00:00:24.239241 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.239313 kubelet[2108]: E0906 00:00:24.239302 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.239675 kubelet[2108]: E0906 00:00:24.239659 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.239782 kubelet[2108]: W0906 00:00:24.239768 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.239838 kubelet[2108]: E0906 00:00:24.239827 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.240076 kubelet[2108]: E0906 00:00:24.240060 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.240177 kubelet[2108]: W0906 00:00:24.240164 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.240259 kubelet[2108]: E0906 00:00:24.240246 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.240478 kubelet[2108]: E0906 00:00:24.240466 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.240581 kubelet[2108]: W0906 00:00:24.240568 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.240638 kubelet[2108]: E0906 00:00:24.240628 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.240948 kubelet[2108]: E0906 00:00:24.240934 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.241066 kubelet[2108]: W0906 00:00:24.241042 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.241135 kubelet[2108]: E0906 00:00:24.241123 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.241438 kubelet[2108]: E0906 00:00:24.241424 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.241523 kubelet[2108]: W0906 00:00:24.241510 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.241638 kubelet[2108]: E0906 00:00:24.241627 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.241917 kubelet[2108]: E0906 00:00:24.241902 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.242006 kubelet[2108]: W0906 00:00:24.241993 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.242078 kubelet[2108]: E0906 00:00:24.242066 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.242487 kubelet[2108]: E0906 00:00:24.242471 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.242600 kubelet[2108]: W0906 00:00:24.242586 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.242830 kubelet[2108]: E0906 00:00:24.242708 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.243113 kubelet[2108]: E0906 00:00:24.243096 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.243188 kubelet[2108]: W0906 00:00:24.243175 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.243283 kubelet[2108]: E0906 00:00:24.243262 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.244496 kubelet[2108]: E0906 00:00:24.244481 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.244603 kubelet[2108]: W0906 00:00:24.244589 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.245249 kubelet[2108]: E0906 00:00:24.244771 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.247103 kubelet[2108]: E0906 00:00:24.247088 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.247207 kubelet[2108]: W0906 00:00:24.247193 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.249012 kubelet[2108]: E0906 00:00:24.247364 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.249152 kubelet[2108]: E0906 00:00:24.249137 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.249214 kubelet[2108]: W0906 00:00:24.249202 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.250156 kubelet[2108]: E0906 00:00:24.249387 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.250356 kubelet[2108]: E0906 00:00:24.250341 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.250427 kubelet[2108]: W0906 00:00:24.250415 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.250593 kubelet[2108]: E0906 00:00:24.250516 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.251886 kubelet[2108]: E0906 00:00:24.251870 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.251989 kubelet[2108]: W0906 00:00:24.251974 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.252282 kubelet[2108]: E0906 00:00:24.252266 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.252369 kubelet[2108]: W0906 00:00:24.252356 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.252628 kubelet[2108]: E0906 00:00:24.252613 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.252995 kubelet[2108]: W0906 00:00:24.252979 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.253078 kubelet[2108]: E0906 00:00:24.253065 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.253345 kubelet[2108]: E0906 00:00:24.252861 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.253462 kubelet[2108]: E0906 00:00:24.252868 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.254588 kubelet[2108]: E0906 00:00:24.254571 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.254691 kubelet[2108]: W0906 00:00:24.254677 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.254764 kubelet[2108]: E0906 00:00:24.254752 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.255160 kubelet[2108]: E0906 00:00:24.255143 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.255248 kubelet[2108]: W0906 00:00:24.255235 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.255306 kubelet[2108]: E0906 00:00:24.255295 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.255577 kubelet[2108]: E0906 00:00:24.255563 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.255666 kubelet[2108]: W0906 00:00:24.255652 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.255731 kubelet[2108]: E0906 00:00:24.255720 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.257989 kubelet[2108]: E0906 00:00:24.257972 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.258097 kubelet[2108]: W0906 00:00:24.258082 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.258214 kubelet[2108]: E0906 00:00:24.258185 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.258441 kubelet[2108]: E0906 00:00:24.258426 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.258520 kubelet[2108]: W0906 00:00:24.258508 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.258599 kubelet[2108]: E0906 00:00:24.258587 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:24.258975 kubelet[2108]: E0906 00:00:24.258960 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:00:24.259090 kubelet[2108]: W0906 00:00:24.259075 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:00:24.259155 kubelet[2108]: E0906 00:00:24.259143 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:00:25.077292 kubelet[2108]: E0906 00:00:25.077240 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:27.077526 kubelet[2108]: E0906 00:00:27.077486 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:28.339755 env[1322]: time="2025-09-06T00:00:28.339711163Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:28.341121 env[1322]: time="2025-09-06T00:00:28.341092710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:28.342486 env[1322]: time="2025-09-06T00:00:28.342446458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:28.344322 env[1322]: time="2025-09-06T00:00:28.344302000Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:28.344780 env[1322]: time="2025-09-06T00:00:28.344757596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 6 00:00:28.353119 env[1322]: time="2025-09-06T00:00:28.353029559Z" level=info msg="CreateContainer within sandbox \"3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 6 00:00:28.369526 env[1322]: time="2025-09-06T00:00:28.369441166Z" level=info msg="CreateContainer within sandbox \"3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ddcc14599df18ed6fe8fd9e888063fd865befb4b387e971ede3954af04eb2f9d\""
Sep 6 00:00:28.371016 env[1322]: time="2025-09-06T00:00:28.370980672Z" level=info msg="StartContainer for \"ddcc14599df18ed6fe8fd9e888063fd865befb4b387e971ede3954af04eb2f9d\""
Sep 6 00:00:28.434448 env[1322]: time="2025-09-06T00:00:28.434335841Z" level=info msg="StartContainer for \"ddcc14599df18ed6fe8fd9e888063fd865befb4b387e971ede3954af04eb2f9d\" returns successfully"
Sep 6 00:00:28.455470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddcc14599df18ed6fe8fd9e888063fd865befb4b387e971ede3954af04eb2f9d-rootfs.mount: Deactivated successfully.
Sep 6 00:00:28.467654 env[1322]: time="2025-09-06T00:00:28.467610051Z" level=info msg="shim disconnected" id=ddcc14599df18ed6fe8fd9e888063fd865befb4b387e971ede3954af04eb2f9d
Sep 6 00:00:28.467654 env[1322]: time="2025-09-06T00:00:28.467655531Z" level=warning msg="cleaning up after shim disconnected" id=ddcc14599df18ed6fe8fd9e888063fd865befb4b387e971ede3954af04eb2f9d namespace=k8s.io
Sep 6 00:00:28.467830 env[1322]: time="2025-09-06T00:00:28.467664851Z" level=info msg="cleaning up dead shim"
Sep 6 00:00:28.475886 env[1322]: time="2025-09-06T00:00:28.475791495Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:00:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2848 runtime=io.containerd.runc.v2\n"
Sep 6 00:00:29.079149 kubelet[2108]: E0906 00:00:29.079068 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:29.206061 env[1322]: time="2025-09-06T00:00:29.206020346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 6 00:00:31.077149 kubelet[2108]: E0906 00:00:31.077079 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:32.950566 kubelet[2108]: I0906 00:00:32.950488 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 6 00:00:32.950939 kubelet[2108]: E0906 00:00:32.950898 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:00:32.974000 audit[2872]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 00:00:32.974000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff552c1b0 a2=0 a3=1 items=0 ppid=2218 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:00:32.980201 kernel: audit: type=1325 audit(1757116832.974:284): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 00:00:32.980283 kernel: audit: type=1300 audit(1757116832.974:284): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff552c1b0 a2=0 a3=1 items=0 ppid=2218 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:00:32.980306 kernel: audit: type=1327 audit(1757116832.974:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 00:00:32.974000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 00:00:32.983000 audit[2872]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 00:00:32.983000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff552c1b0 a2=0 a3=1 items=0 ppid=2218 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:00:32.988571 kernel: audit: type=1325 audit(1757116832.983:285): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 00:00:32.988631 kernel: audit: type=1300 audit(1757116832.983:285): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff552c1b0 a2=0 a3=1 items=0 ppid=2218 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:00:32.988655 kernel: audit: type=1327 audit(1757116832.983:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 00:00:32.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 00:00:33.077425 kubelet[2108]: E0906 00:00:33.077376 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:33.218436 kubelet[2108]: E0906 00:00:33.218313 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:00:34.725822 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:60372.service.
Sep 6 00:00:34.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.34:22-10.0.0.1:60372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:00:34.729571 kernel: audit: type=1130 audit(1757116834.725:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.34:22-10.0.0.1:60372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:00:34.767000 audit[2873]: USER_ACCT pid=2873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.768331 sshd[2873]: Accepted publickey for core from 10.0.0.1 port 60372 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4
Sep 6 00:00:34.769360 sshd[2873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:00:34.768000 audit[2873]: CRED_ACQ pid=2873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.773390 kernel: audit: type=1101 audit(1757116834.767:287): pid=2873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.773457 kernel: audit: type=1103 audit(1757116834.768:288): pid=2873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.773479 kernel: audit: type=1006 audit(1757116834.768:289): pid=2873 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1
Sep 6 00:00:34.773131 systemd-logind[1310]: New session 8 of user core.
Sep 6 00:00:34.773955 systemd[1]: Started session-8.scope.
Sep 6 00:00:34.768000 audit[2873]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff77412a0 a2=3 a3=1 items=0 ppid=1 pid=2873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:00:34.768000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 6 00:00:34.777000 audit[2873]: USER_START pid=2873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.778000 audit[2876]: CRED_ACQ pid=2876 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.886456 sshd[2873]: pam_unix(sshd:session): session closed for user core
Sep 6 00:00:34.886000 audit[2873]: USER_END pid=2873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.886000 audit[2873]: CRED_DISP pid=2873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:34.889039 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:60372.service: Deactivated successfully.
Sep 6 00:00:34.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.34:22-10.0.0.1:60372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:00:34.890095 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 00:00:34.890400 systemd-logind[1310]: Session 8 logged out. Waiting for processes to exit.
Sep 6 00:00:34.891108 systemd-logind[1310]: Removed session 8.
Sep 6 00:00:35.077950 kubelet[2108]: E0906 00:00:35.077833 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:37.078631 kubelet[2108]: E0906 00:00:37.078164 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:39.077942 kubelet[2108]: E0906 00:00:39.077849 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35"
Sep 6 00:00:39.664605 env[1322]: time="2025-09-06T00:00:39.664558578Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:39.666628 env[1322]: time="2025-09-06T00:00:39.666600204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:39.668496 env[1322]: time="2025-09-06T00:00:39.668466431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:39.670462 env[1322]: time="2025-09-06T00:00:39.670428817Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:00:39.671103 env[1322]: time="2025-09-06T00:00:39.671069092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 6 00:00:39.673549 env[1322]: time="2025-09-06T00:00:39.673500075Z" level=info msg="CreateContainer within sandbox \"3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 6 00:00:39.684790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410134262.mount: Deactivated successfully.
Sep 6 00:00:39.688524 env[1322]: time="2025-09-06T00:00:39.688484409Z" level=info msg="CreateContainer within sandbox \"3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"49939f127a263d7601e183fb962d6f12ba80a9ecb76f394ccd522b58c7580e4c\""
Sep 6 00:00:39.689059 env[1322]: time="2025-09-06T00:00:39.689029565Z" level=info msg="StartContainer for \"49939f127a263d7601e183fb962d6f12ba80a9ecb76f394ccd522b58c7580e4c\""
Sep 6 00:00:39.885210 env[1322]: time="2025-09-06T00:00:39.885160015Z" level=info msg="StartContainer for \"49939f127a263d7601e183fb962d6f12ba80a9ecb76f394ccd522b58c7580e4c\" returns successfully"
Sep 6 00:00:39.891348 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 6 00:00:39.891457 kernel: audit: type=1130 audit(1757116839.889:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.34:22-10.0.0.1:60376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:00:39.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.34:22-10.0.0.1:60376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:00:39.889831 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:60376.service.
Sep 6 00:00:39.933000 audit[2922]: USER_ACCT pid=2922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:39.934987 sshd[2922]: Accepted publickey for core from 10.0.0.1 port 60376 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4
Sep 6 00:00:39.937558 kernel: audit: type=1101 audit(1757116839.933:296): pid=2922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:39.937000 audit[2922]: CRED_ACQ pid=2922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:39.938698 sshd[2922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:00:39.942135 kernel: audit: type=1103 audit(1757116839.937:297): pid=2922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:39.942177 kernel: audit: type=1006 audit(1757116839.937:298): pid=2922 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Sep 6 00:00:39.942203 kernel: audit: type=1300 audit(1757116839.937:298): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc105d20 a2=3 a3=1 items=0 ppid=1 pid=2922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:00:39.937000 audit[2922]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc105d20 a2=3 a3=1 items=0 ppid=1 pid=2922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:00:39.946073 kernel: audit: type=1327 audit(1757116839.937:298): proctitle=737368643A20636F7265205B707269765D
Sep 6 00:00:39.937000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 6 00:00:39.951089 systemd-logind[1310]: New session 9 of user core.
Sep 6 00:00:39.951493 systemd[1]: Started session-9.scope.
Sep 6 00:00:39.955000 audit[2922]: USER_START pid=2922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:39.958891 kernel: audit: type=1105 audit(1757116839.955:299): pid=2922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:39.958969 kernel: audit: type=1103 audit(1757116839.958:300): pid=2926 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:39.958000 audit[2926]: CRED_ACQ pid=2926 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:40.148951 sshd[2922]: pam_unix(sshd:session): session closed for user core
Sep 6 00:00:40.149000 audit[2922]: USER_END pid=2922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:40.151449 systemd-logind[1310]: Session 9 logged out. Waiting for processes to exit.
Sep 6 00:00:40.151593 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:60376.service: Deactivated successfully.
Sep 6 00:00:40.149000 audit[2922]: CRED_DISP pid=2922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:40.152377 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 00:00:40.153167 systemd-logind[1310]: Removed session 9.
Sep 6 00:00:40.155040 kernel: audit: type=1106 audit(1757116840.149:301): pid=2922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:40.155129 kernel: audit: type=1104 audit(1757116840.149:302): pid=2922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:00:40.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.34:22-10.0.0.1:60376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:00:40.298046 env[1322]: time="2025-09-06T00:00:40.297995570Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:00:40.314449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49939f127a263d7601e183fb962d6f12ba80a9ecb76f394ccd522b58c7580e4c-rootfs.mount: Deactivated successfully.
Sep 6 00:00:40.320311 env[1322]: time="2025-09-06T00:00:40.320261775Z" level=info msg="shim disconnected" id=49939f127a263d7601e183fb962d6f12ba80a9ecb76f394ccd522b58c7580e4c
Sep 6 00:00:40.320473 env[1322]: time="2025-09-06T00:00:40.320313215Z" level=warning msg="cleaning up after shim disconnected" id=49939f127a263d7601e183fb962d6f12ba80a9ecb76f394ccd522b58c7580e4c namespace=k8s.io
Sep 6 00:00:40.320473 env[1322]: time="2025-09-06T00:00:40.320322134Z" level=info msg="cleaning up dead shim"
Sep 6 00:00:40.327114 env[1322]: time="2025-09-06T00:00:40.327075168Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:00:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2964 runtime=io.containerd.runc.v2\n"
Sep 6 00:00:40.343413 kubelet[2108]: I0906 00:00:40.343386 2108 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 6 00:00:40.469808 kubelet[2108]: I0906 00:00:40.469769 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82a28a4-7ccf-49bb-8f82-e329e2c83546-goldmane-ca-bundle\") pod \"goldmane-7988f88666-k2cw7\" (UID: \"b82a28a4-7ccf-49bb-8f82-e329e2c83546\") " pod="calico-system/goldmane-7988f88666-k2cw7"
Sep 6 00:00:40.470033 kubelet[2108]: I0906 00:00:40.470014 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91147e16-46a1-4693-89ba-b68a85115252-config-volume\") pod \"coredns-7c65d6cfc9-mvwsc\" (UID: \"91147e16-46a1-4693-89ba-b68a85115252\") " pod="kube-system/coredns-7c65d6cfc9-mvwsc"
Sep 6 00:00:40.470167 kubelet[2108]: I0906 00:00:40.470130 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82a28a4-7ccf-49bb-8f82-e329e2c83546-config\") pod \"goldmane-7988f88666-k2cw7\" (UID: \"b82a28a4-7ccf-49bb-8f82-e329e2c83546\") " pod="calico-system/goldmane-7988f88666-k2cw7"
Sep 6 00:00:40.470359 kubelet[2108]: I0906 00:00:40.470337 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdxfh\" (UniqueName: \"kubernetes.io/projected/91147e16-46a1-4693-89ba-b68a85115252-kube-api-access-qdxfh\") pod \"coredns-7c65d6cfc9-mvwsc\" (UID: \"91147e16-46a1-4693-89ba-b68a85115252\") " pod="kube-system/coredns-7c65d6cfc9-mvwsc"
Sep 6 00:00:40.470460 kubelet[2108]: I0906 00:00:40.470445 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b82a28a4-7ccf-49bb-8f82-e329e2c83546-goldmane-key-pair\") pod \"goldmane-7988f88666-k2cw7\" (UID: \"b82a28a4-7ccf-49bb-8f82-e329e2c83546\") " pod="calico-system/goldmane-7988f88666-k2cw7"
Sep 6 00:00:40.470566 kubelet[2108]: I0906 00:00:40.470533 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqwqj\" (UniqueName: \"kubernetes.io/projected/b82a28a4-7ccf-49bb-8f82-e329e2c83546-kube-api-access-qqwqj\") pod \"goldmane-7988f88666-k2cw7\" (UID: \"b82a28a4-7ccf-49bb-8f82-e329e2c83546\") " pod="calico-system/goldmane-7988f88666-k2cw7"
Sep 6 00:00:40.571671 kubelet[2108]: I0906 00:00:40.571559 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgfd\" (UniqueName: \"kubernetes.io/projected/39ede2cd-ddde-4eac-bd4f-184f0738c304-kube-api-access-nsgfd\") pod \"calico-kube-controllers-6f49f47fcf-n2r4d\" (UID: \"39ede2cd-ddde-4eac-bd4f-184f0738c304\") " pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d"
Sep 6 00:00:40.571873 kubelet[2108]: I0906 00:00:40.571851 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtv4s\" (UniqueName: \"kubernetes.io/projected/8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57-kube-api-access-xtv4s\") pod \"calico-apiserver-594cfdd89c-h4tb8\" (UID: \"8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57\") " pod="calico-apiserver/calico-apiserver-594cfdd89c-h4tb8"
Sep 6 00:00:40.571975 kubelet[2108]: I0906 00:00:40.571960 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7r66\" (UniqueName: \"kubernetes.io/projected/d3ad0d4a-3293-484b-9672-41f544529dfe-kube-api-access-c7r66\") pod \"whisker-75d9c4dcb7-n9hzn\" (UID: \"d3ad0d4a-3293-484b-9672-41f544529dfe\") " pod="calico-system/whisker-75d9c4dcb7-n9hzn"
Sep 6 00:00:40.572092 kubelet[2108]: I0906 00:00:40.572079 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-backend-key-pair\") pod \"whisker-75d9c4dcb7-n9hzn\" (UID: \"d3ad0d4a-3293-484b-9672-41f544529dfe\") " pod="calico-system/whisker-75d9c4dcb7-n9hzn"
Sep 6 00:00:40.572325 kubelet[2108]: I0906 00:00:40.572290 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9f5bb4e-c6c9-4116-9894-6226c1ed909d-config-volume\") pod \"coredns-7c65d6cfc9-vcwrt\" (UID: \"a9f5bb4e-c6c9-4116-9894-6226c1ed909d\") " pod="kube-system/coredns-7c65d6cfc9-vcwrt"
Sep 6 00:00:40.572392 kubelet[2108]: I0906 00:00:40.572330 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42v92\" (UniqueName: \"kubernetes.io/projected/ce984b9e-b1d5-41ed-b8a6-43f216d53a5a-kube-api-access-42v92\") pod \"calico-apiserver-594cfdd89c-t5f8l\" (UID: \"ce984b9e-b1d5-41ed-b8a6-43f216d53a5a\") " pod="calico-apiserver/calico-apiserver-594cfdd89c-t5f8l"
Sep 6 00:00:40.572392 kubelet[2108]: I0906 00:00:40.572353 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57-calico-apiserver-certs\") pod \"calico-apiserver-594cfdd89c-h4tb8\" (UID: \"8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57\") " pod="calico-apiserver/calico-apiserver-594cfdd89c-h4tb8"
Sep 6 00:00:40.572392 kubelet[2108]: I0906 00:00:40.572374 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hjfw\" (UniqueName: \"kubernetes.io/projected/a9f5bb4e-c6c9-4116-9894-6226c1ed909d-kube-api-access-2hjfw\") pod \"coredns-7c65d6cfc9-vcwrt\" (UID: \"a9f5bb4e-c6c9-4116-9894-6226c1ed909d\") " pod="kube-system/coredns-7c65d6cfc9-vcwrt"
Sep 6 00:00:40.572471 kubelet[2108]: I0906 00:00:40.572393 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-ca-bundle\") pod \"whisker-75d9c4dcb7-n9hzn\" (UID: \"d3ad0d4a-3293-484b-9672-41f544529dfe\") " pod="calico-system/whisker-75d9c4dcb7-n9hzn"
Sep 6 00:00:40.572471 kubelet[2108]: I0906 00:00:40.572410 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ede2cd-ddde-4eac-bd4f-184f0738c304-tigera-ca-bundle\") pod \"calico-kube-controllers-6f49f47fcf-n2r4d\" (UID: \"39ede2cd-ddde-4eac-bd4f-184f0738c304\") " pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d"
Sep 6 00:00:40.572471 kubelet[2108]: I0906 00:00:40.572426 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce984b9e-b1d5-41ed-b8a6-43f216d53a5a-calico-apiserver-certs\") pod \"calico-apiserver-594cfdd89c-t5f8l\" (UID: \"ce984b9e-b1d5-41ed-b8a6-43f216d53a5a\") " pod="calico-apiserver/calico-apiserver-594cfdd89c-t5f8l"
Sep 6 00:00:40.673684 kubelet[2108]: E0906 00:00:40.673642 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:00:40.674457 env[1322]: time="2025-09-06T00:00:40.674409033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvwsc,Uid:91147e16-46a1-4693-89ba-b68a85115252,Namespace:kube-system,Attempt:0,}"
Sep 6 00:00:40.680842 env[1322]: time="2025-09-06T00:00:40.678488485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-k2cw7,Uid:b82a28a4-7ccf-49bb-8f82-e329e2c83546,Namespace:calico-system,Attempt:0,}"
Sep 6 00:00:40.785114 env[1322]: time="2025-09-06T00:00:40.785030544Z" level=error msg="Failed to destroy network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.785425 env[1322]: time="2025-09-06T00:00:40.785393142Z" level=error msg="encountered an error cleaning up failed sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.785467 env[1322]: time="2025-09-06T00:00:40.785446981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvwsc,Uid:91147e16-46a1-4693-89ba-b68a85115252,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.786212 env[1322]: time="2025-09-06T00:00:40.786161016Z" level=error msg="Failed to destroy network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.786522 kubelet[2108]: E0906 00:00:40.786481 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.786586 kubelet[2108]: E0906 00:00:40.786567 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mvwsc"
Sep 6 00:00:40.786615 env[1322]: time="2025-09-06T00:00:40.786490214Z" level=error msg="encountered an error cleaning up failed sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.786615 env[1322]: time="2025-09-06T00:00:40.786562574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-k2cw7,Uid:b82a28a4-7ccf-49bb-8f82-e329e2c83546,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.786690 kubelet[2108]: E0906 00:00:40.786588 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mvwsc"
Sep 6 00:00:40.786690 kubelet[2108]: E0906 00:00:40.786636 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mvwsc_kube-system(91147e16-46a1-4693-89ba-b68a85115252)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mvwsc_kube-system(91147e16-46a1-4693-89ba-b68a85115252)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mvwsc" podUID="91147e16-46a1-4693-89ba-b68a85115252"
Sep 6 00:00:40.786942 kubelet[2108]: E0906 00:00:40.786921 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:40.787615 kubelet[2108]: E0906 00:00:40.786955 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-k2cw7"
Sep 6 00:00:40.787615 kubelet[2108]: E0906 00:00:40.786970 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-k2cw7"
Sep 6 00:00:40.787615 kubelet[2108]: E0906 00:00:40.786996 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-k2cw7_calico-system(b82a28a4-7ccf-49bb-8f82-e329e2c83546)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-k2cw7_calico-system(b82a28a4-7ccf-49bb-8f82-e329e2c83546)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-k2cw7" podUID="b82a28a4-7ccf-49bb-8f82-e329e2c83546"
Sep 6 00:00:40.978646 env[1322]: time="2025-09-06T00:00:40.978598439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-h4tb8,Uid:8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57,Namespace:calico-apiserver,Attempt:0,}"
Sep 6 00:00:40.979262 env[1322]: time="2025-09-06T00:00:40.979230034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d9c4dcb7-n9hzn,Uid:d3ad0d4a-3293-484b-9672-41f544529dfe,Namespace:calico-system,Attempt:0,}"
Sep 6 00:00:40.981947 env[1322]: time="2025-09-06T00:00:40.981918656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f49f47fcf-n2r4d,Uid:39ede2cd-ddde-4eac-bd4f-184f0738c304,Namespace:calico-system,Attempt:0,}"
Sep 6 00:00:40.984772 env[1322]: time="2025-09-06T00:00:40.984485478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-t5f8l,Uid:ce984b9e-b1d5-41ed-b8a6-43f216d53a5a,Namespace:calico-apiserver,Attempt:0,}"
Sep 6 00:00:40.984894 kubelet[2108]: E0906 00:00:40.984529 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:00:40.985006 env[1322]: time="2025-09-06T00:00:40.984976394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vcwrt,Uid:a9f5bb4e-c6c9-4116-9894-6226c1ed909d,Namespace:kube-system,Attempt:0,}"
Sep 6 00:00:41.082710 env[1322]: time="2025-09-06T00:00:41.082667166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tzrz,Uid:725f1740-cbad-4998-8e87-ef45cb66da35,Namespace:calico-system,Attempt:0,}"
Sep 6 00:00:41.083432 env[1322]: time="2025-09-06T00:00:41.083378401Z" level=error msg="Failed to destroy network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.083841 env[1322]: time="2025-09-06T00:00:41.083791518Z" level=error msg="encountered an error cleaning up failed sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.083907 env[1322]: time="2025-09-06T00:00:41.083855798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-h4tb8,Uid:8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.096373 kubelet[2108]: E0906 00:00:41.096311 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.096373 kubelet[2108]: E0906 00:00:41.096372 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-594cfdd89c-h4tb8"
Sep 6 00:00:41.096580 kubelet[2108]: E0906 00:00:41.096392 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-594cfdd89c-h4tb8"
Sep 6 00:00:41.096580 kubelet[2108]: E0906 00:00:41.096434 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-594cfdd89c-h4tb8_calico-apiserver(8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-594cfdd89c-h4tb8_calico-apiserver(8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-594cfdd89c-h4tb8" podUID="8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57"
Sep 6 00:00:41.096755 env[1322]: time="2025-09-06T00:00:41.096284353Z" level=error msg="Failed to destroy network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.097277 env[1322]: time="2025-09-06T00:00:41.097234226Z" level=error msg="encountered an error cleaning up failed sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.100463 env[1322]: time="2025-09-06T00:00:41.100397165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-t5f8l,Uid:ce984b9e-b1d5-41ed-b8a6-43f216d53a5a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.101191 kubelet[2108]: E0906 00:00:41.101000 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 6 00:00:41.101191 kubelet[2108]: E0906 00:00:41.101066 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-594cfdd89c-t5f8l" Sep 6 00:00:41.101191 kubelet[2108]: E0906 00:00:41.101085 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-594cfdd89c-t5f8l" Sep 6 00:00:41.102338 kubelet[2108]: E0906 00:00:41.101366 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-594cfdd89c-t5f8l_calico-apiserver(ce984b9e-b1d5-41ed-b8a6-43f216d53a5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-594cfdd89c-t5f8l_calico-apiserver(ce984b9e-b1d5-41ed-b8a6-43f216d53a5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-594cfdd89c-t5f8l" podUID="ce984b9e-b1d5-41ed-b8a6-43f216d53a5a" Sep 6 00:00:41.108391 env[1322]: time="2025-09-06T00:00:41.108340630Z" level=error msg="Failed to destroy network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.108574 env[1322]: time="2025-09-06T00:00:41.108512389Z" level=error msg="Failed to destroy network for sandbox 
\"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.109189 env[1322]: time="2025-09-06T00:00:41.108988266Z" level=error msg="encountered an error cleaning up failed sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.109628 env[1322]: time="2025-09-06T00:00:41.109310984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vcwrt,Uid:a9f5bb4e-c6c9-4116-9894-6226c1ed909d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.110329 kubelet[2108]: E0906 00:00:41.109924 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.110329 kubelet[2108]: E0906 00:00:41.109977 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vcwrt" Sep 6 00:00:41.110329 kubelet[2108]: E0906 00:00:41.109994 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vcwrt" Sep 6 00:00:41.110503 kubelet[2108]: E0906 00:00:41.110033 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vcwrt_kube-system(a9f5bb4e-c6c9-4116-9894-6226c1ed909d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vcwrt_kube-system(a9f5bb4e-c6c9-4116-9894-6226c1ed909d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vcwrt" podUID="a9f5bb4e-c6c9-4116-9894-6226c1ed909d" Sep 6 00:00:41.112189 env[1322]: time="2025-09-06T00:00:41.112135645Z" level=error msg="encountered an error cleaning up failed sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.113587 env[1322]: time="2025-09-06T00:00:41.113457355Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d9c4dcb7-n9hzn,Uid:d3ad0d4a-3293-484b-9672-41f544529dfe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.113970 kubelet[2108]: E0906 00:00:41.113799 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.113970 kubelet[2108]: E0906 00:00:41.113857 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d9c4dcb7-n9hzn" Sep 6 00:00:41.113970 kubelet[2108]: E0906 00:00:41.113877 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d9c4dcb7-n9hzn" Sep 6 00:00:41.114104 kubelet[2108]: E0906 00:00:41.113914 2108 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"whisker-75d9c4dcb7-n9hzn_calico-system(d3ad0d4a-3293-484b-9672-41f544529dfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75d9c4dcb7-n9hzn_calico-system(d3ad0d4a-3293-484b-9672-41f544529dfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d9c4dcb7-n9hzn" podUID="d3ad0d4a-3293-484b-9672-41f544529dfe" Sep 6 00:00:41.120306 env[1322]: time="2025-09-06T00:00:41.120261829Z" level=error msg="Failed to destroy network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.120648 env[1322]: time="2025-09-06T00:00:41.120618427Z" level=error msg="encountered an error cleaning up failed sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.120694 env[1322]: time="2025-09-06T00:00:41.120670506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f49f47fcf-n2r4d,Uid:39ede2cd-ddde-4eac-bd4f-184f0738c304,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.121122 kubelet[2108]: E0906 00:00:41.120844 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.121122 kubelet[2108]: E0906 00:00:41.120889 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d" Sep 6 00:00:41.121122 kubelet[2108]: E0906 00:00:41.120907 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d" Sep 6 00:00:41.121283 kubelet[2108]: E0906 00:00:41.120939 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f49f47fcf-n2r4d_calico-system(39ede2cd-ddde-4eac-bd4f-184f0738c304)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f49f47fcf-n2r4d_calico-system(39ede2cd-ddde-4eac-bd4f-184f0738c304)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d" podUID="39ede2cd-ddde-4eac-bd4f-184f0738c304" Sep 6 00:00:41.153705 env[1322]: time="2025-09-06T00:00:41.153647441Z" level=error msg="Failed to destroy network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.154187 env[1322]: time="2025-09-06T00:00:41.154151478Z" level=error msg="encountered an error cleaning up failed sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.154303 env[1322]: time="2025-09-06T00:00:41.154275997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tzrz,Uid:725f1740-cbad-4998-8e87-ef45cb66da35,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.154934 kubelet[2108]: E0906 00:00:41.154579 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.154934 kubelet[2108]: E0906 00:00:41.154641 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tzrz" Sep 6 00:00:41.154934 kubelet[2108]: E0906 00:00:41.154660 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tzrz" Sep 6 00:00:41.155078 kubelet[2108]: E0906 00:00:41.154698 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7tzrz_calico-system(725f1740-cbad-4998-8e87-ef45cb66da35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7tzrz_calico-system(725f1740-cbad-4998-8e87-ef45cb66da35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:41.242157 kubelet[2108]: I0906 
00:00:41.241185 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Sep 6 00:00:41.243344 env[1322]: time="2025-09-06T00:00:41.243297949Z" level=info msg="StopPodSandbox for \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\"" Sep 6 00:00:41.243448 kubelet[2108]: I0906 00:00:41.243345 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Sep 6 00:00:41.243967 env[1322]: time="2025-09-06T00:00:41.243931265Z" level=info msg="StopPodSandbox for \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\"" Sep 6 00:00:41.248256 env[1322]: time="2025-09-06T00:00:41.248185556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 00:00:41.248728 kubelet[2108]: I0906 00:00:41.248618 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Sep 6 00:00:41.249337 env[1322]: time="2025-09-06T00:00:41.249310828Z" level=info msg="StopPodSandbox for \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\"" Sep 6 00:00:41.253556 kubelet[2108]: I0906 00:00:41.252871 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Sep 6 00:00:41.253666 env[1322]: time="2025-09-06T00:00:41.253388880Z" level=info msg="StopPodSandbox for \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\"" Sep 6 00:00:41.254941 kubelet[2108]: I0906 00:00:41.254902 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Sep 6 00:00:41.255421 env[1322]: time="2025-09-06T00:00:41.255387907Z" level=info msg="StopPodSandbox for 
\"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\"" Sep 6 00:00:41.256343 kubelet[2108]: I0906 00:00:41.256308 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Sep 6 00:00:41.256942 env[1322]: time="2025-09-06T00:00:41.256914096Z" level=info msg="StopPodSandbox for \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\"" Sep 6 00:00:41.257988 kubelet[2108]: I0906 00:00:41.257644 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Sep 6 00:00:41.258210 env[1322]: time="2025-09-06T00:00:41.258186088Z" level=info msg="StopPodSandbox for \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\"" Sep 6 00:00:41.259503 kubelet[2108]: I0906 00:00:41.259194 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Sep 6 00:00:41.259717 env[1322]: time="2025-09-06T00:00:41.259693837Z" level=info msg="StopPodSandbox for \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\"" Sep 6 00:00:41.340320 env[1322]: time="2025-09-06T00:00:41.340256408Z" level=error msg="StopPodSandbox for \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\" failed" error="failed to destroy network for sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.340564 kubelet[2108]: E0906 00:00:41.340513 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Sep 6 00:00:41.340641 kubelet[2108]: E0906 00:00:41.340585 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f"} Sep 6 00:00:41.340670 kubelet[2108]: E0906 00:00:41.340640 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3ad0d4a-3293-484b-9672-41f544529dfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.340732 kubelet[2108]: E0906 00:00:41.340690 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3ad0d4a-3293-484b-9672-41f544529dfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d9c4dcb7-n9hzn" podUID="d3ad0d4a-3293-484b-9672-41f544529dfe" Sep 6 00:00:41.348043 env[1322]: time="2025-09-06T00:00:41.347985915Z" level=error msg="StopPodSandbox for \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\" failed" error="failed to destroy network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.348431 kubelet[2108]: E0906 00:00:41.348384 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Sep 6 00:00:41.348779 kubelet[2108]: E0906 00:00:41.348442 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f"} Sep 6 00:00:41.348779 kubelet[2108]: E0906 00:00:41.348479 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b82a28a4-7ccf-49bb-8f82-e329e2c83546\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.348779 kubelet[2108]: E0906 00:00:41.348502 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b82a28a4-7ccf-49bb-8f82-e329e2c83546\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-k2cw7" podUID="b82a28a4-7ccf-49bb-8f82-e329e2c83546" Sep 6 00:00:41.349246 env[1322]: time="2025-09-06T00:00:41.349196707Z" level=error msg="StopPodSandbox for \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\" failed" error="failed to destroy network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.349549 kubelet[2108]: E0906 00:00:41.349510 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Sep 6 00:00:41.349623 kubelet[2108]: E0906 00:00:41.349558 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832"} Sep 6 00:00:41.349623 kubelet[2108]: E0906 00:00:41.349593 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"725f1740-cbad-4998-8e87-ef45cb66da35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.349623 kubelet[2108]: E0906 00:00:41.349611 2108 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"725f1740-cbad-4998-8e87-ef45cb66da35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:41.351690 env[1322]: time="2025-09-06T00:00:41.351651410Z" level=error msg="StopPodSandbox for \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\" failed" error="failed to destroy network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.351996 kubelet[2108]: E0906 00:00:41.351963 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Sep 6 00:00:41.352076 kubelet[2108]: E0906 00:00:41.352002 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e"} Sep 6 00:00:41.352076 kubelet[2108]: E0906 00:00:41.352026 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"91147e16-46a1-4693-89ba-b68a85115252\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.352076 kubelet[2108]: E0906 00:00:41.352049 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91147e16-46a1-4693-89ba-b68a85115252\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mvwsc" podUID="91147e16-46a1-4693-89ba-b68a85115252" Sep 6 00:00:41.363046 env[1322]: time="2025-09-06T00:00:41.363000132Z" level=error msg="StopPodSandbox for \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\" failed" error="failed to destroy network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.363437 kubelet[2108]: E0906 00:00:41.363401 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Sep 6 00:00:41.363514 kubelet[2108]: E0906 00:00:41.363449 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220"} Sep 6 00:00:41.363514 kubelet[2108]: E0906 00:00:41.363478 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce984b9e-b1d5-41ed-b8a6-43f216d53a5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.363514 kubelet[2108]: E0906 00:00:41.363504 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce984b9e-b1d5-41ed-b8a6-43f216d53a5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-594cfdd89c-t5f8l" podUID="ce984b9e-b1d5-41ed-b8a6-43f216d53a5a" Sep 6 00:00:41.366059 env[1322]: time="2025-09-06T00:00:41.366020512Z" level=error msg="StopPodSandbox for \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\" failed" error="failed to destroy network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 6 00:00:41.366346 kubelet[2108]: E0906 00:00:41.366312 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Sep 6 00:00:41.366413 kubelet[2108]: E0906 00:00:41.366352 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78"} Sep 6 00:00:41.366413 kubelet[2108]: E0906 00:00:41.366379 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9f5bb4e-c6c9-4116-9894-6226c1ed909d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.366413 kubelet[2108]: E0906 00:00:41.366396 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9f5bb4e-c6c9-4116-9894-6226c1ed909d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vcwrt" podUID="a9f5bb4e-c6c9-4116-9894-6226c1ed909d" Sep 6 00:00:41.371470 
env[1322]: time="2025-09-06T00:00:41.371431195Z" level=error msg="StopPodSandbox for \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\" failed" error="failed to destroy network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.371898 kubelet[2108]: E0906 00:00:41.371768 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Sep 6 00:00:41.371898 kubelet[2108]: E0906 00:00:41.371802 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4"} Sep 6 00:00:41.371898 kubelet[2108]: E0906 00:00:41.371848 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.371898 kubelet[2108]: E0906 00:00:41.371872 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-594cfdd89c-h4tb8" podUID="8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57" Sep 6 00:00:41.373585 env[1322]: time="2025-09-06T00:00:41.373549580Z" level=error msg="StopPodSandbox for \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\" failed" error="failed to destroy network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:41.373819 kubelet[2108]: E0906 00:00:41.373779 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Sep 6 00:00:41.373819 kubelet[2108]: E0906 00:00:41.373815 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3"} Sep 6 00:00:41.373968 kubelet[2108]: E0906 00:00:41.373863 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39ede2cd-ddde-4eac-bd4f-184f0738c304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:41.373968 kubelet[2108]: E0906 00:00:41.373882 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39ede2cd-ddde-4eac-bd4f-184f0738c304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d" podUID="39ede2cd-ddde-4eac-bd4f-184f0738c304" Sep 6 00:00:41.685418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e-shm.mount: Deactivated successfully. Sep 6 00:00:45.151075 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:47068.service. Sep 6 00:00:45.154880 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:00:45.154955 kernel: audit: type=1130 audit(1757116845.150:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.34:22-10.0.0.1:47068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:45.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.34:22-10.0.0.1:47068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:00:45.198848 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 47068 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:00:45.198000 audit[3397]: USER_ACCT pid=3397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.199000 audit[3397]: CRED_ACQ pid=3397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.204035 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:00:45.208105 kernel: audit: type=1101 audit(1757116845.198:305): pid=3397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.208166 kernel: audit: type=1103 audit(1757116845.199:306): pid=3397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.208187 kernel: audit: type=1006 audit(1757116845.199:307): pid=3397 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 6 00:00:45.207889 systemd-logind[1310]: New session 10 of user core. Sep 6 00:00:45.208653 systemd[1]: Started session-10.scope. 
Sep 6 00:00:45.199000 audit[3397]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebc6eef0 a2=3 a3=1 items=0 ppid=1 pid=3397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:45.211949 kernel: audit: type=1300 audit(1757116845.199:307): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebc6eef0 a2=3 a3=1 items=0 ppid=1 pid=3397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:45.212003 kernel: audit: type=1327 audit(1757116845.199:307): proctitle=737368643A20636F7265205B707269765D Sep 6 00:00:45.199000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:00:45.215000 audit[3397]: USER_START pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.219763 kernel: audit: type=1105 audit(1757116845.215:308): pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.219000 audit[3400]: CRED_ACQ pid=3400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.222883 kernel: audit: type=1103 audit(1757116845.219:309): pid=3400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.345453 sshd[3397]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:45.345000 audit[3397]: USER_END pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.348593 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:47068.service: Deactivated successfully. Sep 6 00:00:45.346000 audit[3397]: CRED_DISP pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.349821 systemd-logind[1310]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:00:45.349864 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:00:45.350729 systemd-logind[1310]: Removed session 10. Sep 6 00:00:45.351880 kernel: audit: type=1106 audit(1757116845.345:310): pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.351932 kernel: audit: type=1104 audit(1757116845.346:311): pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:45.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.34:22-10.0.0.1:47068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:00:50.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.34:22-10.0.0.1:59202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:50.348594 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:59202.service. Sep 6 00:00:50.351467 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:00:50.351561 kernel: audit: type=1130 audit(1757116850.348:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.34:22-10.0.0.1:59202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:50.556000 audit[3415]: USER_ACCT pid=3415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.558651 sshd[3415]: Accepted publickey for core from 10.0.0.1 port 59202 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:00:50.559000 audit[3415]: CRED_ACQ pid=3415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.560270 sshd[3415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:00:50.562316 kernel: audit: type=1101 audit(1757116850.556:314): pid=3415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.562374 kernel: audit: type=1103 audit(1757116850.559:315): pid=3415 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.562402 kernel: audit: type=1006 audit(1757116850.559:316): pid=3415 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 6 00:00:50.559000 audit[3415]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdca08740 a2=3 a3=1 items=0 ppid=1 pid=3415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:50.567128 kernel: audit: type=1300 audit(1757116850.559:316): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdca08740 a2=3 a3=1 items=0 ppid=1 pid=3415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:50.567183 kernel: audit: type=1327 audit(1757116850.559:316): proctitle=737368643A20636F7265205B707269765D Sep 6 00:00:50.559000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:00:50.571799 systemd-logind[1310]: New session 11 of user core. Sep 6 00:00:50.573007 systemd[1]: Started session-11.scope. 
Sep 6 00:00:50.577000 audit[3415]: USER_START pid=3415 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.581000 audit[3418]: CRED_ACQ pid=3418 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.584877 kernel: audit: type=1105 audit(1757116850.577:317): pid=3415 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.584992 kernel: audit: type=1103 audit(1757116850.581:318): pid=3418 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.763088 sshd[3415]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:50.763000 audit[3415]: USER_END pid=3415 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.765330 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:59202.service: Deactivated successfully. Sep 6 00:00:50.766136 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 6 00:00:50.763000 audit[3415]: CRED_DISP pid=3415 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.770487 kernel: audit: type=1106 audit(1757116850.763:319): pid=3415 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.770578 kernel: audit: type=1104 audit(1757116850.763:320): pid=3415 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:50.770582 systemd-logind[1310]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:00:50.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.34:22-10.0.0.1:59202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:50.771314 systemd-logind[1310]: Removed session 11. 
Sep 6 00:00:53.078720 env[1322]: time="2025-09-06T00:00:53.078632620Z" level=info msg="StopPodSandbox for \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\"" Sep 6 00:00:53.136419 env[1322]: time="2025-09-06T00:00:53.136355525Z" level=error msg="StopPodSandbox for \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\" failed" error="failed to destroy network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:53.136667 kubelet[2108]: E0906 00:00:53.136614 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Sep 6 00:00:53.136978 kubelet[2108]: E0906 00:00:53.136682 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832"} Sep 6 00:00:53.136978 kubelet[2108]: E0906 00:00:53.136725 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"725f1740-cbad-4998-8e87-ef45cb66da35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:53.136978 
kubelet[2108]: E0906 00:00:53.136749 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"725f1740-cbad-4998-8e87-ef45cb66da35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7tzrz" podUID="725f1740-cbad-4998-8e87-ef45cb66da35" Sep 6 00:00:53.502002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119161044.mount: Deactivated successfully. Sep 6 00:00:53.828889 env[1322]: time="2025-09-06T00:00:53.828775868Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:53.832552 env[1322]: time="2025-09-06T00:00:53.832452527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:53.834708 env[1322]: time="2025-09-06T00:00:53.834597075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:53.836329 env[1322]: time="2025-09-06T00:00:53.836249545Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:00:53.836623 env[1322]: time="2025-09-06T00:00:53.836587543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference 
\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 6 00:00:53.864525 env[1322]: time="2025-09-06T00:00:53.864481661Z" level=info msg="CreateContainer within sandbox \"3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 00:00:53.891320 env[1322]: time="2025-09-06T00:00:53.891120747Z" level=info msg="CreateContainer within sandbox \"3dd4bb6b660aacb72019221ae7bec6004aeb2ee7dcb59f69038e52e1413752fb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48e499142396eb0819d1d2fb261e2fff4b55f4937f49862f22c2d1ef1eaa1050\"" Sep 6 00:00:53.894463 env[1322]: time="2025-09-06T00:00:53.892346979Z" level=info msg="StartContainer for \"48e499142396eb0819d1d2fb261e2fff4b55f4937f49862f22c2d1ef1eaa1050\"" Sep 6 00:00:53.971708 env[1322]: time="2025-09-06T00:00:53.971639239Z" level=info msg="StartContainer for \"48e499142396eb0819d1d2fb261e2fff4b55f4937f49862f22c2d1ef1eaa1050\" returns successfully" Sep 6 00:00:54.084446 env[1322]: time="2025-09-06T00:00:54.084264031Z" level=info msg="StopPodSandbox for \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\"" Sep 6 00:00:54.120813 env[1322]: time="2025-09-06T00:00:54.120750981Z" level=error msg="StopPodSandbox for \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\" failed" error="failed to destroy network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:00:54.121124 kubelet[2108]: E0906 00:00:54.120963 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Sep 6 00:00:54.121124 kubelet[2108]: E0906 00:00:54.121025 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3"} Sep 6 00:00:54.121124 kubelet[2108]: E0906 00:00:54.121062 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39ede2cd-ddde-4eac-bd4f-184f0738c304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:00:54.121124 kubelet[2108]: E0906 00:00:54.121083 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39ede2cd-ddde-4eac-bd4f-184f0738c304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d" podUID="39ede2cd-ddde-4eac-bd4f-184f0738c304" Sep 6 00:00:54.137490 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 00:00:54.137625 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 6 00:00:54.261359 env[1322]: time="2025-09-06T00:00:54.261312213Z" level=info msg="StopPodSandbox for \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\"" Sep 6 00:00:54.309464 kubelet[2108]: I0906 00:00:54.309407 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2cqgr" podStartSLOduration=1.128893328 podStartE2EDuration="45.309391057s" podCreationTimestamp="2025-09-06 00:00:09 +0000 UTC" firstStartedPulling="2025-09-06 00:00:09.657205687 +0000 UTC m=+19.648643545" lastFinishedPulling="2025-09-06 00:00:53.837703416 +0000 UTC m=+63.829141274" observedRunningTime="2025-09-06 00:00:54.3089157 +0000 UTC m=+64.300353638" watchObservedRunningTime="2025-09-06 00:00:54.309391057 +0000 UTC m=+64.300828915" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.370 [INFO][3536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.371 [INFO][3536] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" iface="eth0" netns="/var/run/netns/cni-ed8b0dd4-2691-c54f-caad-a1ac1bbed1a4" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.372 [INFO][3536] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" iface="eth0" netns="/var/run/netns/cni-ed8b0dd4-2691-c54f-caad-a1ac1bbed1a4" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.373 [INFO][3536] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" iface="eth0" netns="/var/run/netns/cni-ed8b0dd4-2691-c54f-caad-a1ac1bbed1a4" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.373 [INFO][3536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.373 [INFO][3536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.466 [INFO][3568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" HandleID="k8s-pod-network.cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Workload="localhost-k8s-whisker--75d9c4dcb7--n9hzn-eth0" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.466 [INFO][3568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.467 [INFO][3568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.478 [WARNING][3568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" HandleID="k8s-pod-network.cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Workload="localhost-k8s-whisker--75d9c4dcb7--n9hzn-eth0" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.478 [INFO][3568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" HandleID="k8s-pod-network.cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Workload="localhost-k8s-whisker--75d9c4dcb7--n9hzn-eth0" Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.481 [INFO][3568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:54.488223 env[1322]: 2025-09-06 00:00:54.485 [INFO][3536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f" Sep 6 00:00:54.488822 env[1322]: time="2025-09-06T00:00:54.488790066Z" level=info msg="TearDown network for sandbox \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\" successfully" Sep 6 00:00:54.488902 env[1322]: time="2025-09-06T00:00:54.488885905Z" level=info msg="StopPodSandbox for \"cb138c3e57d885094db66fa9b6c8be6f9d3f5eb48256a35160282f274083a55f\" returns successfully" Sep 6 00:00:54.502859 systemd[1]: run-netns-cni\x2ded8b0dd4\x2d2691\x2dc54f\x2dcaad\x2da1ac1bbed1a4.mount: Deactivated successfully. 
Sep 6 00:00:54.584094 kubelet[2108]: I0906 00:00:54.584019 2108 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7r66\" (UniqueName: \"kubernetes.io/projected/d3ad0d4a-3293-484b-9672-41f544529dfe-kube-api-access-c7r66\") pod \"d3ad0d4a-3293-484b-9672-41f544529dfe\" (UID: \"d3ad0d4a-3293-484b-9672-41f544529dfe\") " Sep 6 00:00:54.584094 kubelet[2108]: I0906 00:00:54.584071 2108 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-backend-key-pair\") pod \"d3ad0d4a-3293-484b-9672-41f544529dfe\" (UID: \"d3ad0d4a-3293-484b-9672-41f544529dfe\") " Sep 6 00:00:54.584094 kubelet[2108]: I0906 00:00:54.584091 2108 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-ca-bundle\") pod \"d3ad0d4a-3293-484b-9672-41f544529dfe\" (UID: \"d3ad0d4a-3293-484b-9672-41f544529dfe\") " Sep 6 00:00:54.586579 kubelet[2108]: I0906 00:00:54.586504 2108 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d3ad0d4a-3293-484b-9672-41f544529dfe" (UID: "d3ad0d4a-3293-484b-9672-41f544529dfe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:00:54.590293 systemd[1]: var-lib-kubelet-pods-d3ad0d4a\x2d3293\x2d484b\x2d9672\x2d41f544529dfe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc7r66.mount: Deactivated successfully. Sep 6 00:00:54.592867 systemd[1]: var-lib-kubelet-pods-d3ad0d4a\x2d3293\x2d484b\x2d9672\x2d41f544529dfe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 6 00:00:54.594965 kubelet[2108]: I0906 00:00:54.594142 2108 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d3ad0d4a-3293-484b-9672-41f544529dfe" (UID: "d3ad0d4a-3293-484b-9672-41f544529dfe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:00:54.594965 kubelet[2108]: I0906 00:00:54.594146 2108 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ad0d4a-3293-484b-9672-41f544529dfe-kube-api-access-c7r66" (OuterVolumeSpecName: "kube-api-access-c7r66") pod "d3ad0d4a-3293-484b-9672-41f544529dfe" (UID: "d3ad0d4a-3293-484b-9672-41f544529dfe"). InnerVolumeSpecName "kube-api-access-c7r66". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:00:54.684529 kubelet[2108]: I0906 00:00:54.684487 2108 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7r66\" (UniqueName: \"kubernetes.io/projected/d3ad0d4a-3293-484b-9672-41f544529dfe-kube-api-access-c7r66\") on node \"localhost\" DevicePath \"\"" Sep 6 00:00:54.684757 kubelet[2108]: I0906 00:00:54.684743 2108 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 6 00:00:54.684820 kubelet[2108]: I0906 00:00:54.684810 2108 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ad0d4a-3293-484b-9672-41f544529dfe-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 6 00:00:55.079784 env[1322]: time="2025-09-06T00:00:55.079718234Z" level=info msg="StopPodSandbox for \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\"" Sep 6 00:00:55.080147 env[1322]: 
time="2025-09-06T00:00:55.080121511Z" level=info msg="StopPodSandbox for \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\"" Sep 6 00:00:55.081465 env[1322]: time="2025-09-06T00:00:55.080562869Z" level=info msg="StopPodSandbox for \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\"" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.147 [INFO][3623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.148 [INFO][3623] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" iface="eth0" netns="/var/run/netns/cni-b548e30a-37f7-5def-24b3-50ce3b896300" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.148 [INFO][3623] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" iface="eth0" netns="/var/run/netns/cni-b548e30a-37f7-5def-24b3-50ce3b896300" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.148 [INFO][3623] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" iface="eth0" netns="/var/run/netns/cni-b548e30a-37f7-5def-24b3-50ce3b896300" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.148 [INFO][3623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.148 [INFO][3623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.178 [INFO][3644] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" HandleID="k8s-pod-network.4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Workload="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.178 [INFO][3644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.178 [INFO][3644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.192 [WARNING][3644] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" HandleID="k8s-pod-network.4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Workload="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.192 [INFO][3644] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" HandleID="k8s-pod-network.4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Workload="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.194 [INFO][3644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:55.202606 env[1322]: 2025-09-06 00:00:55.200 [INFO][3623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220" Sep 6 00:00:55.203840 env[1322]: time="2025-09-06T00:00:55.202750453Z" level=info msg="TearDown network for sandbox \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\" successfully" Sep 6 00:00:55.203840 env[1322]: time="2025-09-06T00:00:55.202785853Z" level=info msg="StopPodSandbox for \"4d0b3736574087ec912a0518e566bd814031f32f6e494126d93d83760b571220\" returns successfully" Sep 6 00:00:55.204868 systemd[1]: run-netns-cni\x2db548e30a\x2d37f7\x2d5def\x2d24b3\x2d50ce3b896300.mount: Deactivated successfully. 
Sep 6 00:00:55.206965 env[1322]: time="2025-09-06T00:00:55.206928549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-t5f8l,Uid:ce984b9e-b1d5-41ed-b8a6-43f216d53a5a,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.152 [INFO][3621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.152 [INFO][3621] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" iface="eth0" netns="/var/run/netns/cni-8bac62e8-6b2e-fa10-af75-8b815081457b" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.152 [INFO][3621] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" iface="eth0" netns="/var/run/netns/cni-8bac62e8-6b2e-fa10-af75-8b815081457b" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.153 [INFO][3621] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" iface="eth0" netns="/var/run/netns/cni-8bac62e8-6b2e-fa10-af75-8b815081457b" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.153 [INFO][3621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.153 [INFO][3621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.180 [INFO][3650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" HandleID="k8s-pod-network.03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Workload="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.180 [INFO][3650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.194 [INFO][3650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.205 [WARNING][3650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" HandleID="k8s-pod-network.03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Workload="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.205 [INFO][3650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" HandleID="k8s-pod-network.03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Workload="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.206 [INFO][3650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:55.216652 env[1322]: 2025-09-06 00:00:55.213 [INFO][3621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f" Sep 6 00:00:55.218819 systemd[1]: run-netns-cni\x2d8bac62e8\x2d6b2e\x2dfa10\x2daf75\x2d8b815081457b.mount: Deactivated successfully. 
Sep 6 00:00:55.219904 env[1322]: time="2025-09-06T00:00:55.219677077Z" level=info msg="TearDown network for sandbox \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\" successfully" Sep 6 00:00:55.219904 env[1322]: time="2025-09-06T00:00:55.219726036Z" level=info msg="StopPodSandbox for \"03b92bad40dd68ef33e4b76f8caa78202add14a55f2f3a35c665856f8569fa5f\" returns successfully" Sep 6 00:00:55.221343 env[1322]: time="2025-09-06T00:00:55.221308747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-k2cw7,Uid:b82a28a4-7ccf-49bb-8f82-e329e2c83546,Namespace:calico-system,Attempt:1,}" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.160 [INFO][3622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.160 [INFO][3622] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" iface="eth0" netns="/var/run/netns/cni-88e9fc1c-b173-0cca-daeb-b23f3574338c" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.160 [INFO][3622] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" iface="eth0" netns="/var/run/netns/cni-88e9fc1c-b173-0cca-daeb-b23f3574338c" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.160 [INFO][3622] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" iface="eth0" netns="/var/run/netns/cni-88e9fc1c-b173-0cca-daeb-b23f3574338c" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.160 [INFO][3622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.160 [INFO][3622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.190 [INFO][3657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" HandleID="k8s-pod-network.dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.190 [INFO][3657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.206 [INFO][3657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.219 [WARNING][3657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" HandleID="k8s-pod-network.dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.220 [INFO][3657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" HandleID="k8s-pod-network.dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.222 [INFO][3657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:55.227534 env[1322]: 2025-09-06 00:00:55.225 [INFO][3622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e" Sep 6 00:00:55.227949 env[1322]: time="2025-09-06T00:00:55.227657951Z" level=info msg="TearDown network for sandbox \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\" successfully" Sep 6 00:00:55.227949 env[1322]: time="2025-09-06T00:00:55.227690271Z" level=info msg="StopPodSandbox for \"dfec5b381ab58c8434c593e59523e9ffa70e7d4c00732cd806e93d6058ac7c5e\" returns successfully" Sep 6 00:00:55.228231 kubelet[2108]: E0906 00:00:55.228206 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:55.228915 env[1322]: time="2025-09-06T00:00:55.228875424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvwsc,Uid:91147e16-46a1-4693-89ba-b68a85115252,Namespace:kube-system,Attempt:1,}" Sep 6 00:00:55.439299 systemd-networkd[1097]: calif4f33687637: Link UP Sep 6 00:00:55.442501 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:00:55.442585 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): calif4f33687637: link becomes ready Sep 6 00:00:55.442466 systemd-networkd[1097]: calif4f33687637: Gained carrier Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.267 [INFO][3679] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.282 [INFO][3679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--k2cw7-eth0 goldmane-7988f88666- calico-system b82a28a4-7ccf-49bb-8f82-e329e2c83546 1077 0 2025-09-06 00:00:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-k2cw7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif4f33687637 [] [] }} ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.282 [INFO][3679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.336 [INFO][3718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" HandleID="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Workload="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.337 [INFO][3718] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" HandleID="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Workload="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-k2cw7", "timestamp":"2025-09-06 00:00:55.336245693 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.337 [INFO][3718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.337 [INFO][3718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.337 [INFO][3718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.354 [INFO][3718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.383 [INFO][3718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.393 [INFO][3718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.397 [INFO][3718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.404 [INFO][3718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 
00:00:55.404 [INFO][3718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.409 [INFO][3718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259 Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.418 [INFO][3718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.427 [INFO][3718] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.427 [INFO][3718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" host="localhost" Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.427 [INFO][3718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:00:55.462072 env[1322]: 2025-09-06 00:00:55.427 [INFO][3718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" HandleID="k8s-pod-network.caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Workload="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.462686 env[1322]: 2025-09-06 00:00:55.430 [INFO][3679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--k2cw7-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b82a28a4-7ccf-49bb-8f82-e329e2c83546", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-k2cw7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4f33687637", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:55.462686 env[1322]: 2025-09-06 00:00:55.430 [INFO][3679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.462686 env[1322]: 2025-09-06 00:00:55.430 [INFO][3679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4f33687637 ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.462686 env[1322]: 2025-09-06 00:00:55.443 [INFO][3679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.462686 env[1322]: 2025-09-06 00:00:55.445 [INFO][3679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--k2cw7-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b82a28a4-7ccf-49bb-8f82-e329e2c83546", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259", Pod:"goldmane-7988f88666-k2cw7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4f33687637", MAC:"a6:47:df:b3:3f:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:55.462686 env[1322]: 2025-09-06 00:00:55.460 [INFO][3679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259" Namespace="calico-system" Pod="goldmane-7988f88666-k2cw7" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--k2cw7-eth0" Sep 6 00:00:55.478929 env[1322]: time="2025-09-06T00:00:55.478650922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:00:55.478929 env[1322]: time="2025-09-06T00:00:55.478704761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:00:55.478929 env[1322]: time="2025-09-06T00:00:55.478716281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:00:55.479273 env[1322]: time="2025-09-06T00:00:55.479190918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259 pid=3777 runtime=io.containerd.runc.v2 Sep 6 00:00:55.505387 systemd[1]: run-netns-cni\x2d88e9fc1c\x2db173\x2d0cca\x2ddaeb\x2db23f3574338c.mount: Deactivated successfully. Sep 6 00:00:55.511303 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:00:55.532435 systemd-networkd[1097]: cali1b8287b199c: Link UP Sep 6 00:00:55.533652 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1b8287b199c: link becomes ready Sep 6 00:00:55.534518 systemd-networkd[1097]: cali1b8287b199c: Gained carrier Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.259 [INFO][3668] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.275 [INFO][3668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0 calico-apiserver-594cfdd89c- calico-apiserver ce984b9e-b1d5-41ed-b8a6-43f216d53a5a 1076 0 2025-09-06 00:00:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:594cfdd89c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-594cfdd89c-t5f8l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b8287b199c [] [] }} ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-" Sep 
6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.275 [INFO][3668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.337 [INFO][3712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" HandleID="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Workload="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.337 [INFO][3712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" HandleID="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Workload="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-594cfdd89c-t5f8l", "timestamp":"2025-09-06 00:00:55.336987288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.337 [INFO][3712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.428 [INFO][3712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.428 [INFO][3712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.453 [INFO][3712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.471 [INFO][3712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.493 [INFO][3712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.495 [INFO][3712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.499 [INFO][3712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.506 [INFO][3712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.509 [INFO][3712] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3 Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.514 [INFO][3712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.521 [INFO][3712] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" host="localhost" Sep 6 00:00:55.551030 
env[1322]: 2025-09-06 00:00:55.521 [INFO][3712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" host="localhost" Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.522 [INFO][3712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:55.551030 env[1322]: 2025-09-06 00:00:55.522 [INFO][3712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" HandleID="k8s-pod-network.1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Workload="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.552038 env[1322]: 2025-09-06 00:00:55.529 [INFO][3668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0", GenerateName:"calico-apiserver-594cfdd89c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce984b9e-b1d5-41ed-b8a6-43f216d53a5a", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"594cfdd89c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-594cfdd89c-t5f8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b8287b199c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:55.552038 env[1322]: 2025-09-06 00:00:55.530 [INFO][3668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.552038 env[1322]: 2025-09-06 00:00:55.530 [INFO][3668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b8287b199c ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.552038 env[1322]: 2025-09-06 00:00:55.535 [INFO][3668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.552038 env[1322]: 2025-09-06 00:00:55.536 [INFO][3668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0", GenerateName:"calico-apiserver-594cfdd89c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce984b9e-b1d5-41ed-b8a6-43f216d53a5a", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"594cfdd89c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3", Pod:"calico-apiserver-594cfdd89c-t5f8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b8287b199c", MAC:"9e:fb:00:de:e8:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:55.552038 env[1322]: 2025-09-06 00:00:55.546 [INFO][3668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-t5f8l" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--t5f8l-eth0" Sep 6 00:00:55.552038 env[1322]: time="2025-09-06T00:00:55.551405267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-k2cw7,Uid:b82a28a4-7ccf-49bb-8f82-e329e2c83546,Namespace:calico-system,Attempt:1,} returns sandbox id \"caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259\"" Sep 6 00:00:55.562949 env[1322]: time="2025-09-06T00:00:55.562895162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 00:00:55.586000 audit[3872]: AVC avc: denied { write } for pid=3872 comm="tee" name="fd" dev="proc" ino=21832 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.589430 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:00:55.589517 kernel: audit: type=1400 audit(1757116855.586:322): avc: denied { write } for pid=3872 comm="tee" name="fd" dev="proc" ino=21832 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.589561 kernel: audit: type=1300 audit(1757116855.586:322): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffccbba7ef a2=241 a3=1b6 items=1 ppid=3837 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.586000 audit[3872]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffccbba7ef a2=241 a3=1b6 items=1 ppid=3837 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.592487 kernel: audit: type=1307 audit(1757116855.586:322): cwd="/etc/service/enabled/cni/log" Sep 6 00:00:55.586000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 6 
00:00:55.586000 audit: PATH item=0 name="/dev/fd/63" inode=20785 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.595657 kernel: audit: type=1302 audit(1757116855.586:322): item=0 name="/dev/fd/63" inode=20785 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.595715 kernel: audit: type=1327 audit(1757116855.586:322): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.593000 audit[3866]: AVC avc: denied { write } for pid=3866 comm="tee" name="fd" dev="proc" ino=21836 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.600009 kernel: audit: type=1400 audit(1757116855.593:323): avc: denied { write } for pid=3866 comm="tee" name="fd" dev="proc" ino=21836 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.593000 audit[3866]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd67af7ed a2=241 a3=1b6 items=1 ppid=3825 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.605423 env[1322]: time="2025-09-06T00:00:55.604572644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:00:55.605423 env[1322]: time="2025-09-06T00:00:55.604616964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:00:55.605423 env[1322]: time="2025-09-06T00:00:55.604627004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:00:55.605423 env[1322]: time="2025-09-06T00:00:55.604796603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3 pid=3890 runtime=io.containerd.runc.v2 Sep 6 00:00:55.606564 kernel: audit: type=1300 audit(1757116855.593:323): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd67af7ed a2=241 a3=1b6 items=1 ppid=3825 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.593000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 6 00:00:55.610258 kernel: audit: type=1307 audit(1757116855.593:323): cwd="/etc/service/enabled/felix/log" Sep 6 00:00:55.610335 kernel: audit: type=1302 audit(1757116855.593:323): item=0 name="/dev/fd/63" inode=21825 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.593000 audit: PATH item=0 name="/dev/fd/63" inode=21825 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.614568 kernel: audit: type=1327 audit(1757116855.593:323): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.600000 audit[3887]: AVC avc: denied { write } for pid=3887 comm="tee" name="fd" dev="proc" ino=21845 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.600000 audit[3887]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc0f4d7dd a2=241 a3=1b6 items=1 ppid=3838 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.600000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 6 00:00:55.600000 audit: PATH item=0 name="/dev/fd/63" inode=21842 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.623000 audit[3875]: AVC avc: denied { write } for pid=3875 comm="tee" name="fd" dev="proc" ino=20129 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.623000 audit[3875]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd33cd7de a2=241 a3=1b6 items=1 ppid=3842 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.623000 audit: CWD 
cwd="/etc/service/enabled/node-status-reporter/log" Sep 6 00:00:55.623000 audit: PATH item=0 name="/dev/fd/63" inode=20786 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.634600 systemd[1]: run-containerd-runc-k8s.io-1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3-runc.yOtzjD.mount: Deactivated successfully. Sep 6 00:00:55.649000 audit[3914]: AVC avc: denied { write } for pid=3914 comm="tee" name="fd" dev="proc" ino=20138 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.649000 audit[3914]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff6d147ee a2=241 a3=1b6 items=1 ppid=3831 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.649000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 6 00:00:55.649000 audit: PATH item=0 name="/dev/fd/63" inode=20797 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.661805 systemd-networkd[1097]: cali3f7aee137ca: Link UP Sep 6 00:00:55.662745 systemd-networkd[1097]: cali3f7aee137ca: Gained carrier Sep 6 00:00:55.663581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3f7aee137ca: link becomes ready Sep 6 00:00:55.664000 audit[3923]: AVC avc: denied { write } for pid=3923 
comm="tee" name="fd" dev="proc" ino=20159 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.664000 audit[3923]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffde3a57ed a2=241 a3=1b6 items=1 ppid=3834 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.664000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 6 00:00:55.664000 audit: PATH item=0 name="/dev/fd/63" inode=20135 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.681153 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.296 [INFO][3691] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.320 [INFO][3691] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0 coredns-7c65d6cfc9- kube-system 91147e16-46a1-4693-89ba-b68a85115252 1078 0 2025-09-05 23:59:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mvwsc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f7aee137ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.320 [INFO][3691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.382 [INFO][3746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" HandleID="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Workload="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.382 [INFO][3746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" HandleID="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Workload="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mvwsc", "timestamp":"2025-09-06 00:00:55.382645068 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.382 [INFO][3746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.521 [INFO][3746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.521 [INFO][3746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.553 [INFO][3746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.573 [INFO][3746] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.603 [INFO][3746] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.605 [INFO][3746] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.609 [INFO][3746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.609 [INFO][3746] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.617 [INFO][3746] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19 Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.624 [INFO][3746] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.650 [INFO][3746] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" host="localhost" Sep 6 00:00:55.681283 
env[1322]: 2025-09-06 00:00:55.650 [INFO][3746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" host="localhost" Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.650 [INFO][3746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:55.681283 env[1322]: 2025-09-06 00:00:55.650 [INFO][3746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" HandleID="k8s-pod-network.7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Workload="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.681849 env[1322]: 2025-09-06 00:00:55.654 [INFO][3691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"91147e16-46a1-4693-89ba-b68a85115252", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-7c65d6cfc9-mvwsc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f7aee137ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:55.681849 env[1322]: 2025-09-06 00:00:55.654 [INFO][3691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.681849 env[1322]: 2025-09-06 00:00:55.654 [INFO][3691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f7aee137ca ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.681849 env[1322]: 2025-09-06 00:00:55.663 [INFO][3691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.681849 env[1322]: 2025-09-06 00:00:55.665 [INFO][3691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"91147e16-46a1-4693-89ba-b68a85115252", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19", Pod:"coredns-7c65d6cfc9-mvwsc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f7aee137ca", MAC:"3a:ea:7b:d9:b7:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:55.681849 env[1322]: 2025-09-06 00:00:55.678 [INFO][3691] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvwsc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvwsc-eth0" Sep 6 00:00:55.684000 audit[3922]: AVC avc: denied { write } for pid=3922 comm="tee" name="fd" dev="proc" ino=21901 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:00:55.684000 audit[3922]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe773d7ed a2=241 a3=1b6 items=1 ppid=3827 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.684000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 6 00:00:55.684000 audit: PATH item=0 name="/dev/fd/63" inode=22627 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:00:55.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:00:55.706716 env[1322]: time="2025-09-06T00:00:55.706624343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:00:55.706716 env[1322]: time="2025-09-06T00:00:55.706670263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:00:55.706716 env[1322]: time="2025-09-06T00:00:55.706693503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:00:55.709010 env[1322]: time="2025-09-06T00:00:55.707630057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19 pid=3957 runtime=io.containerd.runc.v2 Sep 6 00:00:55.742628 env[1322]: time="2025-09-06T00:00:55.742577298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-t5f8l,Uid:ce984b9e-b1d5-41ed-b8a6-43f216d53a5a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3\"" Sep 6 00:00:55.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.34:22-10.0.0.1:59218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:55.765636 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:59218.service. 
Sep 6 00:00:55.769506 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:00:55.816752 env[1322]: time="2025-09-06T00:00:55.816701796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvwsc,Uid:91147e16-46a1-4693-89ba-b68a85115252,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19\"" Sep 6 00:00:55.819179 kubelet[2108]: E0906 00:00:55.819143 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:55.823742 env[1322]: time="2025-09-06T00:00:55.823698196Z" level=info msg="CreateContainer within sandbox \"7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit: BPF prog-id=10 op=LOAD Sep 6 00:00:55.837000 audit[4011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd208baa8 a2=98 a3=ffffd208ba98 items=0 ppid=3833 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.837000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:00:55.837000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit: BPF prog-id=11 op=LOAD Sep 6 00:00:55.837000 audit[4011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd208b958 a2=74 a3=95 items=0 ppid=3833 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.837000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:00:55.837000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { bpf } for pid=4011 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit: BPF prog-id=12 op=LOAD Sep 6 00:00:55.837000 audit[4011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd208b988 a2=40 a3=ffffd208b9b8 items=0 ppid=3833 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.837000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:00:55.837000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:00:55.837000 audit[4011]: AVC avc: denied { perfmon } for pid=4011 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.837000 audit[4011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffd208baa0 a2=50 a3=0 items=0 ppid=3833 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.837000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit: BPF prog-id=13 op=LOAD Sep 6 00:00:55.840000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff33756d8 a2=98 a3=fffff33756c8 items=0 ppid=3833 pid=4013 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.840000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.840000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit: BPF prog-id=14 op=LOAD Sep 6 00:00:55.840000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3375368 a2=74 a3=95 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.840000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.840000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.840000 audit: BPF prog-id=15 op=LOAD Sep 6 00:00:55.840000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff33753c8 a2=94 a3=2 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.840000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.840000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:00:55.841000 audit[3996]: USER_ACCT pid=3996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:55.842000 audit[3996]: CRED_ACQ pid=3996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:55.842000 audit[3996]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe462b7a0 a2=3 a3=1 items=0 ppid=1 pid=3996 auid=500 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.842000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:00:55.844884 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 59218 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:00:55.882048 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:00:55.889656 systemd[1]: Started session-12.scope. Sep 6 00:00:55.889857 systemd-logind[1310]: New session 12 of user core. Sep 6 00:00:55.894000 audit[3996]: USER_START pid=3996 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:55.896000 audit[4022]: CRED_ACQ pid=4022 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:55.906089 env[1322]: time="2025-09-06T00:00:55.906035167Z" level=info msg="CreateContainer within sandbox \"7ff9060f18741379907506854803c4cb27e374e4058008d8d15e21e8101c0f19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab0c0f85a8cbfafb6cec2b715fe72255ab1157df3068ba0f2a655691276e3b4c\"" Sep 6 00:00:55.906806 env[1322]: time="2025-09-06T00:00:55.906772923Z" level=info msg="StartContainer for \"ab0c0f85a8cbfafb6cec2b715fe72255ab1157df3068ba0f2a655691276e3b4c\"" Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { bpf } for pid=4013 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit: BPF prog-id=16 op=LOAD Sep 6 00:00:55.948000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3375388 a2=40 a3=fffff33753b8 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.948000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.948000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:00:55.948000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.948000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff33754a0 a2=50 a3=0 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.948000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff33753f8 a2=28 a3=fffff3375528 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3375428 a2=28 a3=fffff3375558 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff33752d8 a2=28 a3=fffff3375408 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff3375448 a2=28 a3=fffff3375578 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff3375428 a2=28 a3=fffff3375558 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff3375418 a2=28 a3=fffff3375548 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff3375448 a2=28 a3=fffff3375578 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3375428 a2=28 a3=fffff3375558 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3375448 a2=28 a3=fffff3375578 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3375418 a2=28 a3=fffff3375548 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff3375498 a2=28 a3=fffff33755d8 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff33751d0 a2=50 a3=0 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit: BPF prog-id=17 op=LOAD Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff33751d8 a2=94 a3=5 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff33752e0 a2=50 a3=0 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff3375428 a2=4 a3=3 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for 
pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.958000 audit[4013]: AVC avc: denied { confidentiality } for pid=4013 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:00:55.958000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff3375408 a2=94 a3=6 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { confidentiality } for pid=4013 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:00:55.959000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff3374bd8 a2=94 a3=83 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:00:55.959000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { perfmon } for pid=4013 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { bpf } for pid=4013 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.959000 audit[4013]: AVC avc: denied { confidentiality } for pid=4013 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:00:55.959000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff3374bd8 a2=94 a3=83 items=0 ppid=3833 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.959000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 
audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit: BPF prog-id=18 op=LOAD Sep 6 00:00:55.970000 audit[4077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3542d88 a2=98 a3=ffffe3542d78 items=0 ppid=3833 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.970000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:00:55.970000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit: BPF prog-id=19 op=LOAD Sep 6 00:00:55.970000 audit[4077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3542c38 a2=74 a3=95 items=0 ppid=3833 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.970000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:00:55.970000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { perfmon } for pid=4077 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit[4077]: AVC avc: denied { bpf } for pid=4077 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:55.970000 audit: BPF prog-id=20 op=LOAD Sep 6 00:00:55.970000 audit[4077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3542c68 a2=40 a3=ffffe3542c98 items=0 ppid=3833 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:55.970000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:00:55.970000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:00:55.975378 env[1322]: time="2025-09-06T00:00:55.973386264Z" level=info msg="StartContainer for \"ab0c0f85a8cbfafb6cec2b715fe72255ab1157df3068ba0f2a655691276e3b4c\" returns successfully" Sep 6 00:00:56.078357 sshd[3996]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:56.084000 audit[3996]: USER_END pid=3996 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.084000 audit[3996]: CRED_DISP pid=3996 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@12-10.0.0.34:22-10.0.0.1:59228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:56.085322 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:59228.service. Sep 6 00:00:56.098045 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:59218.service: Deactivated successfully. Sep 6 00:00:56.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.34:22-10.0.0.1:59218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:56.099318 env[1322]: time="2025-09-06T00:00:56.098894874Z" level=info msg="StopPodSandbox for \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\"" Sep 6 00:00:56.099912 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:00:56.100240 systemd-logind[1310]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:00:56.101993 kubelet[2108]: I0906 00:00:56.101381 2108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ad0d4a-3293-484b-9672-41f544529dfe" path="/var/lib/kubelet/pods/d3ad0d4a-3293-484b-9672-41f544529dfe/volumes" Sep 6 00:00:56.104256 systemd-logind[1310]: Removed session 12. 
Sep 6 00:00:56.105549 systemd-networkd[1097]: vxlan.calico: Link UP Sep 6 00:00:56.105638 systemd-networkd[1097]: vxlan.calico: Gained carrier Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.117000 audit: BPF prog-id=21 op=LOAD Sep 6 00:00:56.117000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe444a3e8 a2=98 a3=ffffe444a3d8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.117000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit: BPF prog-id=22 op=LOAD Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe444a0c8 a2=74 a3=95 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit: BPF prog-id=23 op=LOAD Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe444a128 a2=94 a3=2 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 
00:00:56.118000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe444a158 a2=28 a3=ffffe444a288 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe444a188 a2=28 a3=ffffe444a2b8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe444a038 a2=28 
a3=ffffe444a168 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe444a1a8 a2=28 a3=ffffe444a2d8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe444a188 a2=28 a3=ffffe444a2b8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe444a178 a2=28 a3=ffffe444a2a8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe444a1a8 a2=28 a3=ffffe444a2d8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe444a188 a2=28 a3=ffffe444a2b8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe444a1a8 a2=28 a3=ffffe444a2d8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe444a178 a2=28 a3=ffffe444a2a8 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe444a1f8 a2=28 a3=ffffe444a338 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 
audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit: BPF prog-id=24 op=LOAD Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe444a018 a2=40 a3=ffffe444a048 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: 
SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffe444a040 a2=50 a3=0 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffe444a040 a2=50 a3=0 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.118000 audit: BPF prog-id=25 op=LOAD Sep 6 00:00:56.118000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe44497a8 a2=94 a3=2 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.118000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.119000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { perfmon } for pid=4124 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:00:56.119000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit[4124]: AVC avc: denied { bpf } for pid=4124 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.119000 audit: BPF prog-id=26 op=LOAD Sep 6 00:00:56.119000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe4449938 a2=94 a3=30 items=0 ppid=3833 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.119000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.122000 audit: BPF prog-id=27 op=LOAD Sep 6 00:00:56.122000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1515ab8 a2=98 a3=ffffd1515aa8 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.122000 audit: BPF prog-id=27 op=UNLOAD Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit: BPF prog-id=28 op=LOAD Sep 6 00:00:56.123000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd1515748 a2=74 a3=95 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:00:56.123000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.123000 audit: BPF prog-id=28 op=UNLOAD Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:00:56.123000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.123000 audit: BPF prog-id=29 op=LOAD Sep 6 00:00:56.123000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd15157a8 a2=94 a3=2 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.123000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.123000 audit: BPF prog-id=29 op=UNLOAD Sep 6 00:00:56.147000 audit[4101]: USER_ACCT pid=4101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.148038 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 59228 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:00:56.148000 audit[4101]: CRED_ACQ pid=4101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.149000 audit[4101]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3954480 a2=3 a3=1 items=0 ppid=1 pid=4101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.149000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:00:56.149905 
sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:00:56.154621 systemd[1]: Started session-13.scope. Sep 6 00:00:56.157933 systemd-logind[1310]: New session 13 of user core. Sep 6 00:00:56.168000 audit[4101]: USER_START pid=4101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.170000 audit[4144]: CRED_ACQ pid=4144 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit: BPF prog-id=30 op=LOAD Sep 6 00:00:56.216000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd1515768 a2=40 a3=ffffd1515798 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.216000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.216000 audit: BPF prog-id=30 op=UNLOAD Sep 6 00:00:56.216000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.216000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd1515880 a2=50 a3=0 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.216000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd15157d8 a2=28 a3=ffffd1515908 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1515808 a2=28 a3=ffffd1515938 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 
audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd15156b8 a2=28 a3=ffffd15157e8 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1515828 a2=28 a3=ffffd1515958 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1515808 a2=28 a3=ffffd1515938 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd15157f8 a2=28 a3=ffffd1515928 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1515828 a2=28 a3=ffffd1515958 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: 
SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1515808 a2=28 a3=ffffd1515938 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1515828 a2=28 a3=ffffd1515958 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd15157f8 a2=28 a3=ffffd1515928 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1515878 a2=28 a3=ffffd15159b8 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd15155b0 a2=50 a3=0 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: 
denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.226000 audit: BPF prog-id=31 op=LOAD Sep 6 00:00:56.226000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd15155b8 a2=94 a3=5 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.227000 audit: BPF prog-id=31 op=UNLOAD Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd15156c0 a2=50 a3=0 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd1515808 a2=4 a3=3 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { confidentiality } for pid=4132 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:00:56.227000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd15157e8 a2=94 a3=6 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { 
perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { confidentiality } for pid=4132 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:00:56.227000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1514fb8 a2=94 a3=83 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.227000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:00:56.227000 audit[4132]: AVC avc: denied { confidentiality } for pid=4132 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:00:56.227000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1514fb8 a2=94 a3=83 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.228000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.228000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd15169f8 a2=10 a3=ffffd1516ae8 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.228000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.228000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd15168b8 a2=10 a3=ffffd15169a8 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.228000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.228000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd1516828 a2=10 a3=ffffd15169a8 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.228000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:00:56.228000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd1516828 a2=10 a3=ffffd15169a8 items=0 ppid=3833 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:00:56.235000 audit: BPF prog-id=26 op=UNLOAD Sep 6 
00:00:56.240830 env[1322]: 2025-09-06 00:00:56.183 [INFO][4127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.184 [INFO][4127] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" iface="eth0" netns="/var/run/netns/cni-8b1f5967-dd6c-e53c-ae50-7a8469f49d03" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.184 [INFO][4127] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" iface="eth0" netns="/var/run/netns/cni-8b1f5967-dd6c-e53c-ae50-7a8469f49d03" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.184 [INFO][4127] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" iface="eth0" netns="/var/run/netns/cni-8b1f5967-dd6c-e53c-ae50-7a8469f49d03" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.184 [INFO][4127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.184 [INFO][4127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.211 [INFO][4146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" HandleID="k8s-pod-network.7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Workload="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.211 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.211 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.227 [WARNING][4146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" HandleID="k8s-pod-network.7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Workload="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.227 [INFO][4146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" HandleID="k8s-pod-network.7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Workload="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.236 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:56.240830 env[1322]: 2025-09-06 00:00:56.238 [INFO][4127] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78" Sep 6 00:00:56.241406 env[1322]: time="2025-09-06T00:00:56.240956431Z" level=info msg="TearDown network for sandbox \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\" successfully" Sep 6 00:00:56.241406 env[1322]: time="2025-09-06T00:00:56.240986151Z" level=info msg="StopPodSandbox for \"7473d806994237b94a2723ba83ad6158bfc26c1f51c0452bbb61408434523a78\" returns successfully" Sep 6 00:00:56.241791 kubelet[2108]: E0906 00:00:56.241766 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:56.242209 env[1322]: time="2025-09-06T00:00:56.242166104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vcwrt,Uid:a9f5bb4e-c6c9-4116-9894-6226c1ed909d,Namespace:kube-system,Attempt:1,}" Sep 6 00:00:56.294792 kubelet[2108]: E0906 00:00:56.294549 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:56.296000 audit[4198]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4198 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:00:56.296000 audit[4198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffeb504d70 a2=0 a3=ffffb3705fa8 items=0 ppid=3833 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.296000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:00:56.314727 kubelet[2108]: I0906 00:00:56.314651 2108 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mvwsc" podStartSLOduration=60.314631975 podStartE2EDuration="1m0.314631975s" podCreationTimestamp="2025-09-05 23:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:00:56.31380834 +0000 UTC m=+66.305246198" watchObservedRunningTime="2025-09-06 00:00:56.314631975 +0000 UTC m=+66.306069833" Sep 6 00:00:56.319000 audit[4197]: NETFILTER_CFG table=raw:102 family=2 entries=21 op=nft_register_chain pid=4197 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:00:56.319000 audit[4197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffefa64440 a2=0 a3=ffffbe337fa8 items=0 ppid=3833 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.319000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:00:56.323000 audit[4206]: NETFILTER_CFG table=nat:103 family=2 entries=15 op=nft_register_chain pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:00:56.323000 audit[4206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffde1b3fd0 a2=0 a3=ffffa6bb8fa8 items=0 ppid=3833 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.323000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:00:56.336000 audit[4207]: 
NETFILTER_CFG table=filter:104 family=2 entries=157 op=nft_register_chain pid=4207 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:00:56.336000 audit[4207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=89184 a0=3 a1=ffffc4b26af0 a2=0 a3=ffffbf433fa8 items=0 ppid=3833 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.336000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:00:56.389000 audit[4243]: NETFILTER_CFG table=filter:105 family=2 entries=20 op=nft_register_rule pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:56.389000 audit[4243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd5acc280 a2=0 a3=1 items=0 ppid=2218 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:56.394000 audit[4243]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:56.394000 audit[4243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd5acc280 a2=0 a3=1 items=0 ppid=2218 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.394000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:56.419000 audit[4248]: NETFILTER_CFG table=filter:107 family=2 entries=17 op=nft_register_rule pid=4248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:56.419000 audit[4248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe9952d80 a2=0 a3=1 items=0 ppid=2218 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:56.425000 audit[4248]: NETFILTER_CFG table=nat:108 family=2 entries=35 op=nft_register_chain pid=4248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:56.425000 audit[4248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe9952d80 a2=0 a3=1 items=0 ppid=2218 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:56.458337 systemd-networkd[1097]: calie1c52bc186f: Link UP Sep 6 00:00:56.460818 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie1c52bc186f: link becomes ready Sep 6 00:00:56.460541 systemd-networkd[1097]: calie1c52bc186f: Gained carrier Sep 6 00:00:56.470680 sshd[4101]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:56.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.34:22-10.0.0.1:59242 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:56.472812 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:59242.service. Sep 6 00:00:56.474000 audit[4101]: USER_END pid=4101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.474000 audit[4101]: CRED_DISP pid=4101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.482773 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:59228.service: Deactivated successfully. Sep 6 00:00:56.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.34:22-10.0.0.1:59228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:56.483621 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:00:56.485150 systemd-logind[1310]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:00:56.486041 systemd-logind[1310]: Removed session 13. 
Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.321 [INFO][4173] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0 coredns-7c65d6cfc9- kube-system a9f5bb4e-c6c9-4116-9894-6226c1ed909d 1116 0 2025-09-05 23:59:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-vcwrt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1c52bc186f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.321 [INFO][4173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.383 [INFO][4227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" HandleID="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Workload="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.383 [INFO][4227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" HandleID="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Workload="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002c3020), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-vcwrt", "timestamp":"2025-09-06 00:00:56.383270828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.383 [INFO][4227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.383 [INFO][4227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.384 [INFO][4227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.395 [INFO][4227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.411 [INFO][4227] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.418 [INFO][4227] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.423 [INFO][4227] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.426 [INFO][4227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.426 [INFO][4227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.429 
[INFO][4227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80 Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.436 [INFO][4227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.448 [INFO][4227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.448 [INFO][4227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" host="localhost" Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.448 [INFO][4227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:00:56.486369 env[1322]: 2025-09-06 00:00:56.448 [INFO][4227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" HandleID="k8s-pod-network.15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Workload="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.486931 env[1322]: 2025-09-06 00:00:56.450 [INFO][4173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a9f5bb4e-c6c9-4116-9894-6226c1ed909d", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-vcwrt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1c52bc186f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:56.486931 env[1322]: 2025-09-06 00:00:56.450 [INFO][4173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.486931 env[1322]: 2025-09-06 00:00:56.451 [INFO][4173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1c52bc186f ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.486931 env[1322]: 2025-09-06 00:00:56.459 [INFO][4173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.486931 env[1322]: 2025-09-06 00:00:56.461 [INFO][4173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a9f5bb4e-c6c9-4116-9894-6226c1ed909d", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80", Pod:"coredns-7c65d6cfc9-vcwrt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1c52bc186f", MAC:"ee:bb:d0:85:1f:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:56.486931 env[1322]: 2025-09-06 00:00:56.478 [INFO][4173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-vcwrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vcwrt-eth0" Sep 6 00:00:56.495000 audit[4263]: NETFILTER_CFG table=filter:109 family=2 entries=40 op=nft_register_chain pid=4263 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:00:56.495000 audit[4263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20344 a0=3 a1=ffffc1153b60 a2=0 a3=ffffa386efa8 items=0 ppid=3833 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.495000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:00:56.504659 systemd[1]: run-netns-cni\x2d8b1f5967\x2ddd6c\x2de53c\x2dae50\x2d7a8469f49d03.mount: Deactivated successfully. Sep 6 00:00:56.518640 env[1322]: time="2025-09-06T00:00:56.518572183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:00:56.518640 env[1322]: time="2025-09-06T00:00:56.518615063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:00:56.518640 env[1322]: time="2025-09-06T00:00:56.518625823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:00:56.519049 env[1322]: time="2025-09-06T00:00:56.518895622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80 pid=4273 runtime=io.containerd.runc.v2 Sep 6 00:00:56.541579 systemd[1]: run-containerd-runc-k8s.io-15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80-runc.bKJWCc.mount: Deactivated successfully. Sep 6 00:00:56.548000 audit[4253]: USER_ACCT pid=4253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.549731 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 59242 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:00:56.550000 audit[4253]: CRED_ACQ pid=4253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.550000 audit[4253]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe89b56a0 a2=3 a3=1 items=0 ppid=1 pid=4253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:56.550000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:00:56.551683 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:00:56.556715 systemd-logind[1310]: New session 14 of user core. Sep 6 00:00:56.557136 systemd[1]: Started session-14.scope. 
Sep 6 00:00:56.560286 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:00:56.562000 audit[4253]: USER_START pid=4253 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.563000 audit[4302]: CRED_ACQ pid=4302 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.581373 env[1322]: time="2025-09-06T00:00:56.581337029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vcwrt,Uid:a9f5bb4e-c6c9-4116-9894-6226c1ed909d,Namespace:kube-system,Attempt:1,} returns sandbox id \"15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80\"" Sep 6 00:00:56.582432 kubelet[2108]: E0906 00:00:56.581973 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:56.585760 env[1322]: time="2025-09-06T00:00:56.585725924Z" level=info msg="CreateContainer within sandbox \"15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:00:56.599011 env[1322]: time="2025-09-06T00:00:56.598965689Z" level=info msg="CreateContainer within sandbox \"15aee767e3dab7ba534bcf086f17aceb900aea0e27bd9fab518776b6869f4c80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8fd137f2ba2809ad2b26440ac9df68e157072dcc1d950bc0e88d781698fc34df\"" Sep 6 00:00:56.599575 env[1322]: time="2025-09-06T00:00:56.599403287Z" level=info msg="StartContainer for 
\"8fd137f2ba2809ad2b26440ac9df68e157072dcc1d950bc0e88d781698fc34df\"" Sep 6 00:00:56.652202 env[1322]: time="2025-09-06T00:00:56.652141309Z" level=info msg="StartContainer for \"8fd137f2ba2809ad2b26440ac9df68e157072dcc1d950bc0e88d781698fc34df\" returns successfully" Sep 6 00:00:56.704599 sshd[4253]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:56.705000 audit[4253]: USER_END pid=4253 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.705000 audit[4253]: CRED_DISP pid=4253 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:00:56.707518 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:59242.service: Deactivated successfully. Sep 6 00:00:56.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.34:22-10.0.0.1:59242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:00:56.708534 systemd-logind[1310]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:00:56.708535 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:00:56.709523 systemd-logind[1310]: Removed session 14. 
Sep 6 00:00:56.781888 systemd-networkd[1097]: cali1b8287b199c: Gained IPv6LL Sep 6 00:00:56.909433 systemd-networkd[1097]: calif4f33687637: Gained IPv6LL Sep 6 00:00:57.079861 env[1322]: time="2025-09-06T00:00:57.078635864Z" level=info msg="StopPodSandbox for \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\"" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.131 [INFO][4370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.131 [INFO][4370] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" iface="eth0" netns="/var/run/netns/cni-09d0f992-29ec-05a6-a25e-be653e1b0ae2" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.132 [INFO][4370] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" iface="eth0" netns="/var/run/netns/cni-09d0f992-29ec-05a6-a25e-be653e1b0ae2" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.132 [INFO][4370] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" iface="eth0" netns="/var/run/netns/cni-09d0f992-29ec-05a6-a25e-be653e1b0ae2" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.132 [INFO][4370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.132 [INFO][4370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.158 [INFO][4381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" HandleID="k8s-pod-network.3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Workload="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.158 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.158 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.166 [WARNING][4381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" HandleID="k8s-pod-network.3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Workload="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.166 [INFO][4381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" HandleID="k8s-pod-network.3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Workload="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.168 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:57.176721 env[1322]: 2025-09-06 00:00:57.170 [INFO][4370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4" Sep 6 00:00:57.177375 env[1322]: time="2025-09-06T00:00:57.177335751Z" level=info msg="TearDown network for sandbox \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\" successfully" Sep 6 00:00:57.177460 env[1322]: time="2025-09-06T00:00:57.177443390Z" level=info msg="StopPodSandbox for \"3d88e802880f39a026648cee5a26e1ed62fceabf812d20055bff45fc8cf660e4\" returns successfully" Sep 6 00:00:57.178268 env[1322]: time="2025-09-06T00:00:57.178238786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-h4tb8,Uid:8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:00:57.287706 systemd-networkd[1097]: cali7076bacf6c9: Link UP Sep 6 00:00:57.288745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7076bacf6c9: link becomes ready Sep 6 00:00:57.288652 systemd-networkd[1097]: cali7076bacf6c9: Gained carrier Sep 6 00:00:57.304269 kubelet[2108]: E0906 00:00:57.303150 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:57.304269 kubelet[2108]: E0906 00:00:57.303831 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.221 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0 calico-apiserver-594cfdd89c- calico-apiserver 8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57 1153 0 2025-09-06 00:00:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:594cfdd89c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-594cfdd89c-h4tb8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7076bacf6c9 [] [] }} ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.222 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.243 [INFO][4404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" HandleID="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" 
Workload="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.244 [INFO][4404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" HandleID="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Workload="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-594cfdd89c-h4tb8", "timestamp":"2025-09-06 00:00:57.243972258 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.244 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.244 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.244 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.254 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.259 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.264 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.266 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.268 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.268 [INFO][4404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.270 [INFO][4404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.274 [INFO][4404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.281 [INFO][4404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" host="localhost" Sep 6 00:00:57.305669 
env[1322]: 2025-09-06 00:00:57.281 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" host="localhost" Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.282 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:00:57.305669 env[1322]: 2025-09-06 00:00:57.282 [INFO][4404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" HandleID="k8s-pod-network.c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Workload="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.306352 env[1322]: 2025-09-06 00:00:57.283 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0", GenerateName:"calico-apiserver-594cfdd89c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"594cfdd89c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-594cfdd89c-h4tb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7076bacf6c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:57.306352 env[1322]: 2025-09-06 00:00:57.284 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.306352 env[1322]: 2025-09-06 00:00:57.284 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7076bacf6c9 ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.306352 env[1322]: 2025-09-06 00:00:57.289 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.306352 env[1322]: 2025-09-06 00:00:57.290 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0", GenerateName:"calico-apiserver-594cfdd89c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"594cfdd89c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff", Pod:"calico-apiserver-594cfdd89c-h4tb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7076bacf6c9", MAC:"ce:64:61:ca:a0:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:00:57.306352 env[1322]: 2025-09-06 00:00:57.299 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff" Namespace="calico-apiserver" Pod="calico-apiserver-594cfdd89c-h4tb8" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--594cfdd89c--h4tb8-eth0" Sep 6 00:00:57.316526 kubelet[2108]: I0906 00:00:57.316010 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vcwrt" podStartSLOduration=61.315972414 podStartE2EDuration="1m1.315972414s" podCreationTimestamp="2025-09-05 23:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:00:57.315470737 +0000 UTC m=+67.306908595" watchObservedRunningTime="2025-09-06 00:00:57.315972414 +0000 UTC m=+67.307410272" Sep 6 00:00:57.328796 env[1322]: time="2025-09-06T00:00:57.328676983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:00:57.328796 env[1322]: time="2025-09-06T00:00:57.328730503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:00:57.328796 env[1322]: time="2025-09-06T00:00:57.328740823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:00:57.330077 env[1322]: time="2025-09-06T00:00:57.329589978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff pid=4429 runtime=io.containerd.runc.v2 Sep 6 00:00:57.340000 audit[4435]: NETFILTER_CFG table=filter:110 family=2 entries=49 op=nft_register_chain pid=4435 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:00:57.340000 audit[4435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=ffffe6122d50 a2=0 a3=ffffa80cafa8 items=0 ppid=3833 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:57.340000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:00:57.346000 audit[4447]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:57.346000 audit[4447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff1e90f00 a2=0 a3=1 items=0 ppid=2218 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:57.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:57.364054 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:00:57.364000 audit[4447]: NETFILTER_CFG table=nat:112 family=2 entries=56 
op=nft_register_chain pid=4447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:00:57.364000 audit[4447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff1e90f00 a2=0 a3=1 items=0 ppid=2218 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:00:57.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:00:57.381595 env[1322]: time="2025-09-06T00:00:57.381522247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cfdd89c-h4tb8,Uid:8f6fe62e-2ea5-4c6e-95b0-87c42f1c5b57,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff\"" Sep 6 00:00:57.505304 systemd[1]: run-netns-cni\x2d09d0f992\x2d29ec\x2d05a6\x2da25e\x2dbe653e1b0ae2.mount: Deactivated successfully. Sep 6 00:00:57.548728 systemd-networkd[1097]: cali3f7aee137ca: Gained IPv6LL Sep 6 00:00:57.683176 systemd[1]: run-containerd-runc-k8s.io-48e499142396eb0819d1d2fb261e2fff4b55f4937f49862f22c2d1ef1eaa1050-runc.gKLiJQ.mount: Deactivated successfully. 
Sep 6 00:00:57.804716 systemd-networkd[1097]: vxlan.calico: Gained IPv6LL Sep 6 00:00:58.306022 kubelet[2108]: E0906 00:00:58.305978 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:58.306631 kubelet[2108]: E0906 00:00:58.306600 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:00:58.380746 systemd-networkd[1097]: calie1c52bc186f: Gained IPv6LL Sep 6 00:00:58.828752 systemd-networkd[1097]: cali7076bacf6c9: Gained IPv6LL Sep 6 00:00:59.308364 kubelet[2108]: E0906 00:00:59.308295 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:01.707462 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:46342.service. Sep 6 00:01:01.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.34:22-10.0.0.1:46342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:01.715692 kernel: kauditd_printk_skb: 604 callbacks suppressed Sep 6 00:01:01.715828 kernel: audit: type=1130 audit(1757116861.707:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.34:22-10.0.0.1:46342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:01.761000 audit[4501]: USER_ACCT pid=4501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.764619 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 46342 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:01.765573 kernel: audit: type=1101 audit(1757116861.761:467): pid=4501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.765000 audit[4501]: CRED_ACQ pid=4501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.769766 kernel: audit: type=1103 audit(1757116861.765:468): pid=4501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.769824 kernel: audit: type=1006 audit(1757116861.767:469): pid=4501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 6 00:01:01.769849 kernel: audit: type=1300 audit(1757116861.767:469): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9b1aef0 a2=3 a3=1 items=0 ppid=1 pid=4501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:01.767000 audit[4501]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9b1aef0 a2=3 
a3=1 items=0 ppid=1 pid=4501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:01.767000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:01.773858 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:01.774565 kernel: audit: type=1327 audit(1757116861.767:469): proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:01.777763 systemd-logind[1310]: New session 15 of user core. Sep 6 00:01:01.778215 systemd[1]: Started session-15.scope. Sep 6 00:01:01.781000 audit[4501]: USER_START pid=4501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.783000 audit[4505]: CRED_ACQ pid=4505 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.787838 kernel: audit: type=1105 audit(1757116861.781:470): pid=4501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.787919 kernel: audit: type=1103 audit(1757116861.783:471): pid=4505 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.846978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091787193.mount: Deactivated successfully. 
Sep 6 00:01:01.950941 sshd[4501]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:01.951000 audit[4501]: USER_END pid=4501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.954000 audit[4501]: CRED_DISP pid=4501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.957215 kernel: audit: type=1106 audit(1757116861.951:472): pid=4501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.957284 kernel: audit: type=1104 audit(1757116861.954:473): pid=4501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:01.958832 systemd-logind[1310]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:01:01.958987 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:46342.service: Deactivated successfully. Sep 6 00:01:01.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.34:22-10.0.0.1:46342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:01.960169 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:01:01.960584 systemd-logind[1310]: Removed session 15. 
Sep 6 00:01:02.455791 env[1322]: time="2025-09-06T00:01:02.455727806Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:02.457089 env[1322]: time="2025-09-06T00:01:02.457054109Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:02.458678 env[1322]: time="2025-09-06T00:01:02.458639897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:02.460249 env[1322]: time="2025-09-06T00:01:02.460221684Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:02.461555 env[1322]: time="2025-09-06T00:01:02.461516867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 6 00:01:02.463727 env[1322]: time="2025-09-06T00:01:02.463687664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:01:02.465924 env[1322]: time="2025-09-06T00:01:02.465887182Z" level=info msg="CreateContainer within sandbox \"caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 00:01:02.481517 env[1322]: time="2025-09-06T00:01:02.481467212Z" level=info msg="CreateContainer within sandbox \"caf272822ca3eb73d009e34126e2aeb912e8b207ea17a70de83b24b4c646c259\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"45cf9e762f0c6edb84c99ec0b337add18bf517cb31465318f2a44223c2932dea\"" Sep 6 00:01:02.482574 env[1322]: time="2025-09-06T00:01:02.482413869Z" level=info msg="StartContainer for \"45cf9e762f0c6edb84c99ec0b337add18bf517cb31465318f2a44223c2932dea\"" Sep 6 00:01:02.556376 env[1322]: time="2025-09-06T00:01:02.556324428Z" level=info msg="StartContainer for \"45cf9e762f0c6edb84c99ec0b337add18bf517cb31465318f2a44223c2932dea\" returns successfully" Sep 6 00:01:03.401000 audit[4580]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=4580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:03.401000 audit[4580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffde9ac270 a2=0 a3=1 items=0 ppid=2218 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:03.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:03.412000 audit[4580]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:03.412000 audit[4580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffde9ac270 a2=0 a3=1 items=0 ppid=2218 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:03.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:04.078578 kubelet[2108]: E0906 00:01:04.078150 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:04.598158 env[1322]: time="2025-09-06T00:01:04.598112294Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:04.599421 env[1322]: time="2025-09-06T00:01:04.599393794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:04.601495 env[1322]: time="2025-09-06T00:01:04.601462908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:04.602864 env[1322]: time="2025-09-06T00:01:04.602828210Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:04.603431 env[1322]: time="2025-09-06T00:01:04.603389779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 00:01:04.604915 env[1322]: time="2025-09-06T00:01:04.604889043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:01:04.607089 env[1322]: time="2025-09-06T00:01:04.607046158Z" level=info msg="CreateContainer within sandbox \"1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:01:04.623834 env[1322]: time="2025-09-06T00:01:04.623776707Z" level=info msg="CreateContainer within sandbox \"1cbe902dbe780d23e7154d4245c5be76edf81ce1b6a4c4cfc482fa49a0e45bd3\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c9af79341f148f2dcd01c2df34018b6b5d707a47a43e5a0bbd1917ebbb99c6c3\"" Sep 6 00:01:04.625526 env[1322]: time="2025-09-06T00:01:04.624435998Z" level=info msg="StartContainer for \"c9af79341f148f2dcd01c2df34018b6b5d707a47a43e5a0bbd1917ebbb99c6c3\"" Sep 6 00:01:04.691333 env[1322]: time="2025-09-06T00:01:04.691250954Z" level=info msg="StartContainer for \"c9af79341f148f2dcd01c2df34018b6b5d707a47a43e5a0bbd1917ebbb99c6c3\" returns successfully" Sep 6 00:01:05.334498 kubelet[2108]: I0906 00:01:05.334438 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-k2cw7" podStartSLOduration=49.425363484 podStartE2EDuration="56.334420928s" podCreationTimestamp="2025-09-06 00:00:09 +0000 UTC" firstStartedPulling="2025-09-06 00:00:55.554138372 +0000 UTC m=+65.545576230" lastFinishedPulling="2025-09-06 00:01:02.463195816 +0000 UTC m=+72.454633674" observedRunningTime="2025-09-06 00:01:03.342029583 +0000 UTC m=+73.333467401" watchObservedRunningTime="2025-09-06 00:01:05.334420928 +0000 UTC m=+75.325858786" Sep 6 00:01:05.334930 kubelet[2108]: I0906 00:01:05.334884 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-594cfdd89c-t5f8l" podStartSLOduration=51.474036191 podStartE2EDuration="1m0.334876216s" podCreationTimestamp="2025-09-06 00:00:05 +0000 UTC" firstStartedPulling="2025-09-06 00:00:55.743717052 +0000 UTC m=+65.735154910" lastFinishedPulling="2025-09-06 00:01:04.604557077 +0000 UTC m=+74.595994935" observedRunningTime="2025-09-06 00:01:05.334344607 +0000 UTC m=+75.325782465" watchObservedRunningTime="2025-09-06 00:01:05.334876216 +0000 UTC m=+75.326314074" Sep 6 00:01:05.350000 audit[4645]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:05.350000 audit[4645]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=5248 a0=3 a1=ffffcd008720 a2=0 a3=1 items=0 ppid=2218 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:05.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:05.357000 audit[4645]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:05.357000 audit[4645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcd008720 a2=0 a3=1 items=0 ppid=2218 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:05.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:06.325986 kubelet[2108]: I0906 00:01:06.325952 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:01:06.889533 env[1322]: time="2025-09-06T00:01:06.889483728Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:06.892188 env[1322]: time="2025-09-06T00:01:06.892149288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:06.896033 env[1322]: time="2025-09-06T00:01:06.895982946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 00:01:06.899243 env[1322]: time="2025-09-06T00:01:06.899196714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:06.899888 env[1322]: time="2025-09-06T00:01:06.899853124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 00:01:06.902430 env[1322]: time="2025-09-06T00:01:06.902392802Z" level=info msg="CreateContainer within sandbox \"c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:01:06.920644 env[1322]: time="2025-09-06T00:01:06.920586195Z" level=info msg="CreateContainer within sandbox \"c9b80252d39ac119a4149b861431b7a949299c10ae418ef749bb5840b75bf7ff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"45505c6e95267a4bfeef51f3dba5444f0c623ede6a5f63dd95db622cccdcdb3d\"" Sep 6 00:01:06.921229 env[1322]: time="2025-09-06T00:01:06.921201284Z" level=info msg="StartContainer for \"45505c6e95267a4bfeef51f3dba5444f0c623ede6a5f63dd95db622cccdcdb3d\"" Sep 6 00:01:06.942917 systemd[1]: run-containerd-runc-k8s.io-45505c6e95267a4bfeef51f3dba5444f0c623ede6a5f63dd95db622cccdcdb3d-runc.ziHt56.mount: Deactivated successfully. Sep 6 00:01:06.957802 kernel: kauditd_printk_skb: 13 callbacks suppressed Sep 6 00:01:06.957921 kernel: audit: type=1130 audit(1757116866.953:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.34:22-10.0.0.1:46354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:06.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.34:22-10.0.0.1:46354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:06.954291 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:46354.service. Sep 6 00:01:06.994803 env[1322]: time="2025-09-06T00:01:06.993128442Z" level=info msg="StartContainer for \"45505c6e95267a4bfeef51f3dba5444f0c623ede6a5f63dd95db622cccdcdb3d\" returns successfully" Sep 6 00:01:07.007000 audit[4676]: USER_ACCT pid=4676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.008936 sshd[4676]: Accepted publickey for core from 10.0.0.1 port 46354 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:07.010438 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:07.008000 audit[4676]: CRED_ACQ pid=4676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.013453 kernel: audit: type=1101 audit(1757116867.007:480): pid=4676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.013532 kernel: audit: type=1103 audit(1757116867.008:481): pid=4676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Sep 6 00:01:07.013576 kernel: audit: type=1006 audit(1757116867.008:482): pid=4676 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 6 00:01:07.015148 systemd[1]: Started session-16.scope. Sep 6 00:01:07.008000 audit[4676]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0251f70 a2=3 a3=1 items=0 ppid=1 pid=4676 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:07.015491 systemd-logind[1310]: New session 16 of user core. Sep 6 00:01:07.019584 kernel: audit: type=1300 audit(1757116867.008:482): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0251f70 a2=3 a3=1 items=0 ppid=1 pid=4676 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:07.019669 kernel: audit: type=1327 audit(1757116867.008:482): proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:07.008000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:07.021000 audit[4676]: USER_START pid=4676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.023000 audit[4690]: CRED_ACQ pid=4690 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.027708 kernel: audit: type=1105 audit(1757116867.021:483): pid=4676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.027782 kernel: audit: type=1103 audit(1757116867.023:484): pid=4690 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.078718 env[1322]: time="2025-09-06T00:01:07.078637042Z" level=info msg="StopPodSandbox for \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\"" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.176 [INFO][4715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.176 [INFO][4715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" iface="eth0" netns="/var/run/netns/cni-f12de3a3-791f-a7b0-3148-96d6507040eb" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.176 [INFO][4715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" iface="eth0" netns="/var/run/netns/cni-f12de3a3-791f-a7b0-3148-96d6507040eb" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.176 [INFO][4715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" iface="eth0" netns="/var/run/netns/cni-f12de3a3-791f-a7b0-3148-96d6507040eb" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.176 [INFO][4715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.176 [INFO][4715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.215 [INFO][4724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" HandleID="k8s-pod-network.ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Workload="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.215 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.215 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.226 [WARNING][4724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" HandleID="k8s-pod-network.ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Workload="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.226 [INFO][4724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" HandleID="k8s-pod-network.ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Workload="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.230 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:01:07.235652 env[1322]: 2025-09-06 00:01:07.232 [INFO][4715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832" Sep 6 00:01:07.236128 env[1322]: time="2025-09-06T00:01:07.235860555Z" level=info msg="TearDown network for sandbox \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\" successfully" Sep 6 00:01:07.236128 env[1322]: time="2025-09-06T00:01:07.235894995Z" level=info msg="StopPodSandbox for \"ae212ff08fcbfa57fae646bde77e6c70d6b8b81bfb5a0c60da14677e520b5832\" returns successfully" Sep 6 00:01:07.236595 env[1322]: time="2025-09-06T00:01:07.236565045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tzrz,Uid:725f1740-cbad-4998-8e87-ef45cb66da35,Namespace:calico-system,Attempt:1,}" Sep 6 00:01:07.244181 sshd[4676]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:07.244000 audit[4676]: USER_END pid=4676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.247232 
systemd-logind[1310]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:01:07.248556 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:46354.service: Deactivated successfully. Sep 6 00:01:07.244000 audit[4676]: CRED_DISP pid=4676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.249669 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:01:07.251332 kernel: audit: type=1106 audit(1757116867.244:485): pid=4676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.251406 kernel: audit: type=1104 audit(1757116867.244:486): pid=4676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:07.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.34:22-10.0.0.1:46354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:07.251338 systemd-logind[1310]: Removed session 16. 
Sep 6 00:01:07.348561 kubelet[2108]: I0906 00:01:07.348469 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-594cfdd89c-h4tb8" podStartSLOduration=52.830294523 podStartE2EDuration="1m2.348451422s" podCreationTimestamp="2025-09-06 00:00:05 +0000 UTC" firstStartedPulling="2025-09-06 00:00:57.38274832 +0000 UTC m=+67.374186178" lastFinishedPulling="2025-09-06 00:01:06.900905219 +0000 UTC m=+76.892343077" observedRunningTime="2025-09-06 00:01:07.348088177 +0000 UTC m=+77.339526035" watchObservedRunningTime="2025-09-06 00:01:07.348451422 +0000 UTC m=+77.339889280" Sep 6 00:01:07.354698 systemd-networkd[1097]: calif937ffd2bd4: Link UP Sep 6 00:01:07.356758 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:01:07.356917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif937ffd2bd4: link becomes ready Sep 6 00:01:07.357041 systemd-networkd[1097]: calif937ffd2bd4: Gained carrier Sep 6 00:01:07.359000 audit[4762]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:07.359000 audit[4762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe380bf60 a2=0 a3=1 items=0 ppid=2218 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:07.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:07.363000 audit[4762]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:07.363000 audit[4762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe380bf60 a2=0 a3=1 items=0 ppid=2218 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:07.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.280 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7tzrz-eth0 csi-node-driver- calico-system 725f1740-cbad-4998-8e87-ef45cb66da35 1227 0 2025-09-06 00:00:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7tzrz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif937ffd2bd4 [] [] }} ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.281 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.303 [INFO][4752] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" HandleID="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Workload="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 
00:01:07.304 [INFO][4752] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" HandleID="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Workload="localhost-k8s-csi--node--driver--7tzrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005b04e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7tzrz", "timestamp":"2025-09-06 00:01:07.303920378 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.304 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.304 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.304 [INFO][4752] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.314 [INFO][4752] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.319 [INFO][4752] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.323 [INFO][4752] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.325 [INFO][4752] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.327 [INFO][4752] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.327 [INFO][4752] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.329 [INFO][4752] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.338 [INFO][4752] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.349 [INFO][4752] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" host="localhost" Sep 6 00:01:07.374635 
env[1322]: 2025-09-06 00:01:07.349 [INFO][4752] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" host="localhost" Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.349 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:01:07.374635 env[1322]: 2025-09-06 00:01:07.349 [INFO][4752] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" HandleID="k8s-pod-network.ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Workload="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.375264 env[1322]: 2025-09-06 00:01:07.352 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7tzrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"725f1740-cbad-4998-8e87-ef45cb66da35", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7tzrz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif937ffd2bd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:01:07.375264 env[1322]: 2025-09-06 00:01:07.352 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.375264 env[1322]: 2025-09-06 00:01:07.352 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif937ffd2bd4 ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.375264 env[1322]: 2025-09-06 00:01:07.357 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.375264 env[1322]: 2025-09-06 00:01:07.358 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7tzrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"725f1740-cbad-4998-8e87-ef45cb66da35", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b", Pod:"csi-node-driver-7tzrz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif937ffd2bd4", MAC:"9e:3a:a1:22:6c:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:01:07.375264 env[1322]: 2025-09-06 00:01:07.370 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b" Namespace="calico-system" Pod="csi-node-driver-7tzrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7tzrz-eth0" Sep 6 00:01:07.385424 env[1322]: time="2025-09-06T00:01:07.385348275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:01:07.385577 env[1322]: time="2025-09-06T00:01:07.385435757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:01:07.385577 env[1322]: time="2025-09-06T00:01:07.385463077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:01:07.385911 env[1322]: time="2025-09-06T00:01:07.385659520Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b pid=4780 runtime=io.containerd.runc.v2 Sep 6 00:01:07.391000 audit[4793]: NETFILTER_CFG table=filter:119 family=2 entries=58 op=nft_register_chain pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:01:07.391000 audit[4793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27180 a0=3 a1=ffffeba11150 a2=0 a3=ffff8b58afa8 items=0 ppid=3833 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:07.391000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:01:07.415577 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:01:07.428436 env[1322]: time="2025-09-06T00:01:07.428393937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tzrz,Uid:725f1740-cbad-4998-8e87-ef45cb66da35,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b\"" Sep 6 00:01:07.429721 env[1322]: 
time="2025-09-06T00:01:07.429691316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 00:01:07.917298 systemd[1]: run-netns-cni\x2df12de3a3\x2d791f\x2da7b0\x2d3148\x2d96d6507040eb.mount: Deactivated successfully. Sep 6 00:01:08.079425 kubelet[2108]: E0906 00:01:08.078923 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:08.333443 kubelet[2108]: I0906 00:01:08.333373 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:01:08.620684 systemd-networkd[1097]: calif937ffd2bd4: Gained IPv6LL Sep 6 00:01:09.078221 env[1322]: time="2025-09-06T00:01:09.078024545Z" level=info msg="StopPodSandbox for \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\"" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.123 [INFO][4827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.124 [INFO][4827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" iface="eth0" netns="/var/run/netns/cni-4db90406-41eb-9528-95a5-6f73fdb95aa7" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.124 [INFO][4827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" iface="eth0" netns="/var/run/netns/cni-4db90406-41eb-9528-95a5-6f73fdb95aa7" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.124 [INFO][4827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" iface="eth0" netns="/var/run/netns/cni-4db90406-41eb-9528-95a5-6f73fdb95aa7" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.124 [INFO][4827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.124 [INFO][4827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.143 [INFO][4836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" HandleID="k8s-pod-network.8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Workload="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.143 [INFO][4836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.143 [INFO][4836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.152 [WARNING][4836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" HandleID="k8s-pod-network.8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Workload="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.152 [INFO][4836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" HandleID="k8s-pod-network.8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Workload="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.157 [INFO][4836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:01:09.162261 env[1322]: 2025-09-06 00:01:09.160 [INFO][4827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3" Sep 6 00:01:09.164765 systemd[1]: run-netns-cni\x2d4db90406\x2d41eb\x2d9528\x2d95a5\x2d6f73fdb95aa7.mount: Deactivated successfully. 
Sep 6 00:01:09.166302 env[1322]: time="2025-09-06T00:01:09.166258050Z" level=info msg="TearDown network for sandbox \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\" successfully" Sep 6 00:01:09.166380 env[1322]: time="2025-09-06T00:01:09.166363132Z" level=info msg="StopPodSandbox for \"8924afbcb054b01cfa63ad59514e991fb0fe4e8c6bfc6b6166303c65a93ebef3\" returns successfully" Sep 6 00:01:09.167236 env[1322]: time="2025-09-06T00:01:09.167183503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f49f47fcf-n2r4d,Uid:39ede2cd-ddde-4eac-bd4f-184f0738c304,Namespace:calico-system,Attempt:1,}" Sep 6 00:01:09.356867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:01:09.357015 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic6687dd32dc: link becomes ready Sep 6 00:01:09.354967 systemd-networkd[1097]: calic6687dd32dc: Link UP Sep 6 00:01:09.357048 systemd-networkd[1097]: calic6687dd32dc: Gained carrier Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.275 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0 calico-kube-controllers-6f49f47fcf- calico-system 39ede2cd-ddde-4eac-bd4f-184f0738c304 1247 0 2025-09-06 00:00:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f49f47fcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f49f47fcf-n2r4d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic6687dd32dc [] [] }} ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.275 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.301 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" HandleID="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Workload="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.302 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" HandleID="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Workload="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f49f47fcf-n2r4d", "timestamp":"2025-09-06 00:01:09.301829471 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.302 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.302 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.302 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.312 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.317 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.323 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.325 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.328 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.328 [INFO][4860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.332 [INFO][4860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1 Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.341 [INFO][4860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.349 [INFO][4860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" host="localhost" Sep 6 00:01:09.379778 
env[1322]: 2025-09-06 00:01:09.349 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" host="localhost" Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.349 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:01:09.379778 env[1322]: 2025-09-06 00:01:09.349 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" HandleID="k8s-pod-network.ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Workload="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.380744 env[1322]: 2025-09-06 00:01:09.352 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0", GenerateName:"calico-kube-controllers-6f49f47fcf-", Namespace:"calico-system", SelfLink:"", UID:"39ede2cd-ddde-4eac-bd4f-184f0738c304", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f49f47fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f49f47fcf-n2r4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6687dd32dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:01:09.380744 env[1322]: 2025-09-06 00:01:09.352 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.380744 env[1322]: 2025-09-06 00:01:09.352 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6687dd32dc ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.380744 env[1322]: 2025-09-06 00:01:09.358 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.380744 env[1322]: 2025-09-06 00:01:09.367 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0", GenerateName:"calico-kube-controllers-6f49f47fcf-", Namespace:"calico-system", SelfLink:"", UID:"39ede2cd-ddde-4eac-bd4f-184f0738c304", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f49f47fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1", Pod:"calico-kube-controllers-6f49f47fcf-n2r4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6687dd32dc", MAC:"2e:81:44:c8:4f:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:01:09.380744 env[1322]: 2025-09-06 00:01:09.377 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1" Namespace="calico-system" Pod="calico-kube-controllers-6f49f47fcf-n2r4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f49f47fcf--n2r4d-eth0" Sep 6 00:01:09.392574 env[1322]: time="2025-09-06T00:01:09.392371087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:01:09.392574 env[1322]: time="2025-09-06T00:01:09.392452968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:01:09.392574 env[1322]: time="2025-09-06T00:01:09.392485569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:01:09.393025 env[1322]: time="2025-09-06T00:01:09.392818813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1 pid=4884 runtime=io.containerd.runc.v2 Sep 6 00:01:09.400000 audit[4900]: NETFILTER_CFG table=filter:120 family=2 entries=52 op=nft_register_chain pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:01:09.400000 audit[4900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffe3e377c0 a2=0 a3=ffff873a8fa8 items=0 ppid=3833 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:09.400000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:01:09.426424 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address 
Sep 6 00:01:09.448245 env[1322]: time="2025-09-06T00:01:09.448198837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f49f47fcf-n2r4d,Uid:39ede2cd-ddde-4eac-bd4f-184f0738c304,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1\"" Sep 6 00:01:10.485377 env[1322]: time="2025-09-06T00:01:10.485303412Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:10.486907 env[1322]: time="2025-09-06T00:01:10.486860712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:10.490976 env[1322]: time="2025-09-06T00:01:10.490938764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:10.494103 env[1322]: time="2025-09-06T00:01:10.494070125Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:10.494691 env[1322]: time="2025-09-06T00:01:10.494661133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 6 00:01:10.497341 env[1322]: time="2025-09-06T00:01:10.497286567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 6 00:01:10.498261 env[1322]: time="2025-09-06T00:01:10.498228139Z" level=info msg="CreateContainer within sandbox \"ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 00:01:10.530027 env[1322]: time="2025-09-06T00:01:10.529965150Z" level=info msg="CreateContainer within sandbox \"ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b8d3c9e3c792abf6bdf8c7f77a5329e4d9120762f11f188f7e8a513d1225411a\"" Sep 6 00:01:10.530688 env[1322]: time="2025-09-06T00:01:10.530655999Z" level=info msg="StartContainer for \"b8d3c9e3c792abf6bdf8c7f77a5329e4d9120762f11f188f7e8a513d1225411a\"" Sep 6 00:01:10.602601 env[1322]: time="2025-09-06T00:01:10.602531849Z" level=info msg="StartContainer for \"b8d3c9e3c792abf6bdf8c7f77a5329e4d9120762f11f188f7e8a513d1225411a\" returns successfully" Sep 6 00:01:11.180726 systemd-networkd[1097]: calic6687dd32dc: Gained IPv6LL Sep 6 00:01:12.254017 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:46866.service. Sep 6 00:01:12.261583 kernel: kauditd_printk_skb: 13 callbacks suppressed Sep 6 00:01:12.261685 kernel: audit: type=1130 audit(1757116872.253:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.34:22-10.0.0.1:46866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:12.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.34:22-10.0.0.1:46866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:12.305000 audit[4973]: USER_ACCT pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.306355 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 46866 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:12.308563 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:12.307000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.319482 kernel: audit: type=1101 audit(1757116872.305:493): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.319586 kernel: audit: type=1103 audit(1757116872.307:494): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.323027 kernel: audit: type=1006 audit(1757116872.307:495): pid=4973 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 6 00:01:12.307000 audit[4973]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc5d886f0 a2=3 a3=1 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:12.330248 systemd[1]: 
Started session-17.scope. Sep 6 00:01:12.331912 kernel: audit: type=1300 audit(1757116872.307:495): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc5d886f0 a2=3 a3=1 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:12.331966 systemd-logind[1310]: New session 17 of user core. Sep 6 00:01:12.307000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:12.334160 kernel: audit: type=1327 audit(1757116872.307:495): proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:12.338000 audit[4973]: USER_START pid=4973 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.342611 kernel: audit: type=1105 audit(1757116872.338:496): pid=4973 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.342000 audit[4976]: CRED_ACQ pid=4976 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.350695 kernel: audit: type=1103 audit(1757116872.342:497): pid=4976 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.694260 sshd[4973]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:12.695000 audit[4973]: 
USER_END pid=4973 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.697404 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:46866.service: Deactivated successfully. Sep 6 00:01:12.698461 systemd-logind[1310]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:01:12.698511 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:01:12.699331 systemd-logind[1310]: Removed session 17. Sep 6 00:01:12.695000 audit[4973]: CRED_DISP pid=4973 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.702170 kernel: audit: type=1106 audit(1757116872.695:498): pid=4973 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.702234 kernel: audit: type=1104 audit(1757116872.695:499): pid=4973 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:12.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.34:22-10.0.0.1:46866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:13.560090 env[1322]: time="2025-09-06T00:01:13.560038564Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:13.562314 env[1322]: time="2025-09-06T00:01:13.562287230Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:13.564104 env[1322]: time="2025-09-06T00:01:13.564078171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:13.565803 env[1322]: time="2025-09-06T00:01:13.565772710Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:13.566492 env[1322]: time="2025-09-06T00:01:13.566445638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 6 00:01:13.568773 env[1322]: time="2025-09-06T00:01:13.568726825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 00:01:13.587611 env[1322]: time="2025-09-06T00:01:13.587568163Z" level=info msg="CreateContainer within sandbox \"ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 6 00:01:13.603778 env[1322]: time="2025-09-06T00:01:13.603489267Z" level=info msg="CreateContainer within sandbox \"ab41b42375b7287951ad99bea43193ef93c771a0311904c2f9671933a13ab5d1\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1892eab78ffea4721ff464589bf643fe55998d19a2a67410c144c5d5adb820ff\"" Sep 6 00:01:13.605250 env[1322]: time="2025-09-06T00:01:13.604653320Z" level=info msg="StartContainer for \"1892eab78ffea4721ff464589bf643fe55998d19a2a67410c144c5d5adb820ff\"" Sep 6 00:01:13.670699 env[1322]: time="2025-09-06T00:01:13.670651844Z" level=info msg="StartContainer for \"1892eab78ffea4721ff464589bf643fe55998d19a2a67410c144c5d5adb820ff\" returns successfully" Sep 6 00:01:14.385819 kubelet[2108]: I0906 00:01:14.385770 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f49f47fcf-n2r4d" podStartSLOduration=61.26777068 podStartE2EDuration="1m5.385754995s" podCreationTimestamp="2025-09-06 00:00:09 +0000 UTC" firstStartedPulling="2025-09-06 00:01:09.449667137 +0000 UTC m=+79.441104955" lastFinishedPulling="2025-09-06 00:01:13.567651412 +0000 UTC m=+83.559089270" observedRunningTime="2025-09-06 00:01:14.385603233 +0000 UTC m=+84.377041091" watchObservedRunningTime="2025-09-06 00:01:14.385754995 +0000 UTC m=+84.377192853" Sep 6 00:01:14.515201 kubelet[2108]: I0906 00:01:14.515165 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:01:14.558000 audit[5058]: NETFILTER_CFG table=filter:121 family=2 entries=13 op=nft_register_rule pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:14.558000 audit[5058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffd242cf80 a2=0 a3=1 items=0 ppid=2218 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:14.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:14.566000 
audit[5058]: NETFILTER_CFG table=nat:122 family=2 entries=27 op=nft_register_chain pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:14.566000 audit[5058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffd242cf80 a2=0 a3=1 items=0 ppid=2218 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:14.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:15.425399 env[1322]: time="2025-09-06T00:01:15.425320082Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:15.427205 env[1322]: time="2025-09-06T00:01:15.427172742Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:15.430084 env[1322]: time="2025-09-06T00:01:15.430013492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:15.432327 env[1322]: time="2025-09-06T00:01:15.432253837Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:15.432611 env[1322]: time="2025-09-06T00:01:15.432582400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference 
\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 6 00:01:15.435008 env[1322]: time="2025-09-06T00:01:15.434977466Z" level=info msg="CreateContainer within sandbox \"ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 00:01:15.447896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024904863.mount: Deactivated successfully. Sep 6 00:01:15.455164 env[1322]: time="2025-09-06T00:01:15.455104082Z" level=info msg="CreateContainer within sandbox \"ba2afcffe9ae9693fd83106f7f6ad6ee878395ed31677808a966fe2e5a0b798b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b6e48e59863e2264f630aa3ee9ba3fb63733188d12eb38cd00ee17b46304886\"" Sep 6 00:01:15.455660 env[1322]: time="2025-09-06T00:01:15.455627127Z" level=info msg="StartContainer for \"9b6e48e59863e2264f630aa3ee9ba3fb63733188d12eb38cd00ee17b46304886\"" Sep 6 00:01:15.534106 env[1322]: time="2025-09-06T00:01:15.534052209Z" level=info msg="StartContainer for \"9b6e48e59863e2264f630aa3ee9ba3fb63733188d12eb38cd00ee17b46304886\" returns successfully" Sep 6 00:01:15.576292 systemd[1]: run-containerd-runc-k8s.io-9b6e48e59863e2264f630aa3ee9ba3fb63733188d12eb38cd00ee17b46304886-runc.6GGdhi.mount: Deactivated successfully. 
Sep 6 00:01:16.215265 kubelet[2108]: I0906 00:01:16.215230 2108 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 00:01:16.215714 kubelet[2108]: I0906 00:01:16.215698 2108 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 00:01:16.392745 kubelet[2108]: I0906 00:01:16.392665 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7tzrz" podStartSLOduration=59.388468483 podStartE2EDuration="1m7.392645981s" podCreationTimestamp="2025-09-06 00:00:09 +0000 UTC" firstStartedPulling="2025-09-06 00:01:07.429506714 +0000 UTC m=+77.420944572" lastFinishedPulling="2025-09-06 00:01:15.433684212 +0000 UTC m=+85.425122070" observedRunningTime="2025-09-06 00:01:16.390086275 +0000 UTC m=+86.381524133" watchObservedRunningTime="2025-09-06 00:01:16.392645981 +0000 UTC m=+86.384083839" Sep 6 00:01:17.696312 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:46868.service. Sep 6 00:01:17.701174 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:01:17.701267 kernel: audit: type=1130 audit(1757116877.695:503): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.34:22-10.0.0.1:46868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:17.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.34:22-10.0.0.1:46868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:17.742000 audit[5102]: USER_ACCT pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.743712 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 46868 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:17.746587 kernel: audit: type=1101 audit(1757116877.742:504): pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.747000 audit[5102]: CRED_ACQ pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.748474 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:17.752746 kernel: audit: type=1103 audit(1757116877.747:505): pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.752801 kernel: audit: type=1006 audit(1757116877.747:506): pid=5102 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 6 00:01:17.752822 kernel: audit: type=1300 audit(1757116877.747:506): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffae4ace0 a2=3 a3=1 items=0 ppid=1 pid=5102 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:01:17.747000 audit[5102]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffae4ace0 a2=3 a3=1 items=0 ppid=1 pid=5102 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:17.747000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:17.757606 kernel: audit: type=1327 audit(1757116877.747:506): proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:17.758846 systemd-logind[1310]: New session 18 of user core. Sep 6 00:01:17.759713 systemd[1]: Started session-18.scope. Sep 6 00:01:17.763000 audit[5102]: USER_START pid=5102 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.767000 audit[5105]: CRED_ACQ pid=5105 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.770714 kernel: audit: type=1105 audit(1757116877.763:507): pid=5102 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.770773 kernel: audit: type=1103 audit(1757116877.767:508): pid=5105 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.928717 sshd[5102]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:17.929000 audit[1]: 
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.34:22-10.0.0.1:46882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:17.930124 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:46882.service. Sep 6 00:01:17.933569 kernel: audit: type=1130 audit(1757116877.929:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.34:22-10.0.0.1:46882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:17.934000 audit[5102]: USER_END pid=5102 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.938129 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:46868.service: Deactivated successfully. Sep 6 00:01:17.934000 audit[5102]: CRED_DISP pid=5102 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.34:22-10.0.0.1:46868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:17.939390 systemd-logind[1310]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:01:17.939452 systemd[1]: session-18.scope: Deactivated successfully. 
Sep 6 00:01:17.939555 kernel: audit: type=1106 audit(1757116877.934:510): pid=5102 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.940181 systemd-logind[1310]: Removed session 18. Sep 6 00:01:17.970000 audit[5114]: USER_ACCT pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.971199 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 46882 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:17.971000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.971000 audit[5114]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9180ff0 a2=3 a3=1 items=0 ppid=1 pid=5114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:17.971000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:17.972797 sshd[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:17.976165 systemd-logind[1310]: New session 19 of user core. Sep 6 00:01:17.977028 systemd[1]: Started session-19.scope. 
Sep 6 00:01:17.980000 audit[5114]: USER_START pid=5114 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:17.981000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:18.220877 sshd[5114]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:18.222000 audit[5114]: USER_END pid=5114 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:18.222000 audit[5114]: CRED_DISP pid=5114 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:18.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.34:22-10.0.0.1:46896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:18.223244 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:46896.service. Sep 6 00:01:18.226077 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:46882.service: Deactivated successfully. Sep 6 00:01:18.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.34:22-10.0.0.1:46882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:18.227417 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:01:18.227454 systemd-logind[1310]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:01:18.228897 systemd-logind[1310]: Removed session 19. Sep 6 00:01:18.265000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:18.266668 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 46896 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:18.267000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:18.267000 audit[5126]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd5f8df0 a2=3 a3=1 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:18.267000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:18.268298 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:18.272587 systemd-logind[1310]: New session 20 of user core. Sep 6 00:01:18.273087 systemd[1]: Started session-20.scope. 
Sep 6 00:01:18.277000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:18.279000 audit[5131]: CRED_ACQ pid=5131 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:19.077665 kubelet[2108]: E0906 00:01:19.077618 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:19.813000 audit[5148]: NETFILTER_CFG table=filter:123 family=2 entries=24 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:19.813000 audit[5148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=fffff403d070 a2=0 a3=1 items=0 ppid=2218 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:19.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:19.816856 sshd[5126]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:19.816948 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:46906.service. Sep 6 00:01:19.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.34:22-10.0.0.1:46906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:19.822000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:19.822000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:19.822000 audit[5148]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:19.822000 audit[5148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff403d070 a2=0 a3=1 items=0 ppid=2218 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:19.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:19.824730 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:46896.service: Deactivated successfully. Sep 6 00:01:19.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.34:22-10.0.0.1:46896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:19.826058 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:01:19.826505 systemd-logind[1310]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:01:19.827490 systemd-logind[1310]: Removed session 20. 
Sep 6 00:01:19.842000 audit[5154]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_rule pid=5154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:19.842000 audit[5154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=ffffd008f3e0 a2=0 a3=1 items=0 ppid=2218 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:19.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:19.851000 audit[5154]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:19.851000 audit[5154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffd008f3e0 a2=0 a3=1 items=0 ppid=2218 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:19.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:19.876000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:19.877307 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 46906 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:19.877000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:19.877000 audit[5149]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd09b0fb0 a2=3 a3=1 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:19.877000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:19.878442 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:19.882827 systemd[1]: Started session-21.scope. Sep 6 00:01:19.883032 systemd-logind[1310]: New session 21 of user core. Sep 6 00:01:19.886000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:19.888000 audit[5156]: CRED_ACQ pid=5156 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.357570 sshd[5149]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:20.360189 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:42890.service. Sep 6 00:01:20.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.34:22-10.0.0.1:42890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:20.361000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.361000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.368401 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:46906.service: Deactivated successfully. Sep 6 00:01:20.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.34:22-10.0.0.1:46906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:20.369778 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:01:20.370580 systemd-logind[1310]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:01:20.372570 systemd-logind[1310]: Removed session 21. 
Sep 6 00:01:20.409000 audit[5163]: USER_ACCT pid=5163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.410559 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 42890 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:20.411000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.411000 audit[5163]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9117470 a2=3 a3=1 items=0 ppid=1 pid=5163 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:20.411000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:20.412210 sshd[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:20.416525 systemd[1]: Started session-22.scope. Sep 6 00:01:20.416735 systemd-logind[1310]: New session 22 of user core. 
Sep 6 00:01:20.419000 audit[5163]: USER_START pid=5163 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.421000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.536644 sshd[5163]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:20.537000 audit[5163]: USER_END pid=5163 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.537000 audit[5163]: CRED_DISP pid=5163 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:20.539409 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:42890.service: Deactivated successfully. Sep 6 00:01:20.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.34:22-10.0.0.1:42890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:20.540448 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:01:20.541150 systemd-logind[1310]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:01:20.542063 systemd-logind[1310]: Removed session 22. 
Sep 6 00:01:23.655000 audit[5180]: NETFILTER_CFG table=filter:127 family=2 entries=35 op=nft_register_rule pid=5180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:23.657934 kernel: kauditd_printk_skb: 57 callbacks suppressed Sep 6 00:01:23.658016 kernel: audit: type=1325 audit(1757116883.655:552): table=filter:127 family=2 entries=35 op=nft_register_rule pid=5180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:23.655000 audit[5180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12688 a0=3 a1=fffff82b7e00 a2=0 a3=1 items=0 ppid=2218 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:23.663680 kernel: audit: type=1300 audit(1757116883.655:552): arch=c00000b7 syscall=211 success=yes exit=12688 a0=3 a1=fffff82b7e00 a2=0 a3=1 items=0 ppid=2218 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:23.663760 kernel: audit: type=1327 audit(1757116883.655:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:23.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:23.665000 audit[5180]: NETFILTER_CFG table=nat:128 family=2 entries=29 op=nft_register_chain pid=5180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:23.665000 audit[5180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=fffff82b7e00 a2=0 a3=1 items=0 ppid=2218 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:23.673650 kernel: audit: type=1325 audit(1757116883.665:553): table=nat:128 family=2 entries=29 op=nft_register_chain pid=5180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:23.673722 kernel: audit: type=1300 audit(1757116883.665:553): arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=fffff82b7e00 a2=0 a3=1 items=0 ppid=2218 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:23.673740 kernel: audit: type=1327 audit(1757116883.665:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:23.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:25.540169 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:42902.service. Sep 6 00:01:25.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.34:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:25.543563 kernel: audit: type=1130 audit(1757116885.538:554): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.34:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:25.583000 audit[5181]: USER_ACCT pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.585696 sshd[5181]: Accepted publickey for core from 10.0.0.1 port 42902 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:25.586980 sshd[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:25.585000 audit[5181]: CRED_ACQ pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.590872 systemd-logind[1310]: New session 23 of user core. Sep 6 00:01:25.591292 systemd[1]: Started session-23.scope. Sep 6 00:01:25.593577 kernel: audit: type=1101 audit(1757116885.583:555): pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.593658 kernel: audit: type=1103 audit(1757116885.585:556): pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.593676 kernel: audit: type=1006 audit(1757116885.585:557): pid=5181 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 6 00:01:25.585000 audit[5181]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe33239a0 a2=3 a3=1 items=0 ppid=1 pid=5181 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:25.585000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:25.596000 audit[5181]: USER_START pid=5181 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.598000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.715273 sshd[5181]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:25.714000 audit[5181]: USER_END pid=5181 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.714000 audit[5181]: CRED_DISP pid=5181 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:25.720496 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:42902.service: Deactivated successfully. Sep 6 00:01:25.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.34:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:25.721720 systemd-logind[1310]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:01:25.721784 systemd[1]: session-23.scope: Deactivated successfully. 
Sep 6 00:01:25.723180 systemd-logind[1310]: Removed session 23. Sep 6 00:01:26.078186 kubelet[2108]: E0906 00:01:26.077759 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:26.649000 audit[5198]: NETFILTER_CFG table=filter:129 family=2 entries=22 op=nft_register_rule pid=5198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:26.649000 audit[5198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffcc072ea0 a2=0 a3=1 items=0 ppid=2218 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:26.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:26.656000 audit[5198]: NETFILTER_CFG table=nat:130 family=2 entries=108 op=nft_register_chain pid=5198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:26.656000 audit[5198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50220 a0=3 a1=ffffcc072ea0 a2=0 a3=1 items=0 ppid=2218 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:26.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:30.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.34:22-10.0.0.1:58556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:30.718606 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:58556.service. Sep 6 00:01:30.721891 kernel: kauditd_printk_skb: 13 callbacks suppressed Sep 6 00:01:30.721992 kernel: audit: type=1130 audit(1757116890.717:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.34:22-10.0.0.1:58556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:30.762000 audit[5245]: USER_ACCT pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.764095 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 58556 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:30.765871 sshd[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:30.763000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.769765 kernel: audit: type=1101 audit(1757116890.762:566): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.769824 kernel: audit: type=1103 audit(1757116890.763:567): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.769842 kernel: audit: type=1006 audit(1757116890.764:568): pid=5245 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 6 00:01:30.764000 audit[5245]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0e2c170 a2=3 a3=1 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:30.774061 kernel: audit: type=1300 audit(1757116890.764:568): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0e2c170 a2=3 a3=1 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:30.774113 kernel: audit: type=1327 audit(1757116890.764:568): proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:30.764000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:30.778105 systemd-logind[1310]: New session 24 of user core. Sep 6 00:01:30.778854 systemd[1]: Started session-24.scope. 
Sep 6 00:01:30.786000 audit[5245]: USER_START pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.788000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.793792 kernel: audit: type=1105 audit(1757116890.786:569): pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.793855 kernel: audit: type=1103 audit(1757116890.788:570): pid=5248 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.943473 sshd[5245]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:30.942000 audit[5245]: USER_END pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.947113 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:58556.service: Deactivated successfully. 
Sep 6 00:01:30.943000 audit[5245]: CRED_DISP pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.950009 kernel: audit: type=1106 audit(1757116890.942:571): pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.950121 kernel: audit: type=1104 audit(1757116890.943:572): pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:30.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.34:22-10.0.0.1:58556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:30.949996 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:01:30.950468 systemd-logind[1310]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:01:30.952128 systemd-logind[1310]: Removed session 24. 
Sep 6 00:01:35.057592 kubelet[2108]: I0906 00:01:35.057514 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:01:35.094000 audit[5260]: NETFILTER_CFG table=filter:131 family=2 entries=10 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:35.094000 audit[5260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffda68a3e0 a2=0 a3=1 items=0 ppid=2218 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:35.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:35.104000 audit[5260]: NETFILTER_CFG table=nat:132 family=2 entries=60 op=nft_register_chain pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:35.104000 audit[5260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21220 a0=3 a1=ffffda68a3e0 a2=0 a3=1 items=0 ppid=2218 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:35.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:35.947156 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:58572.service. Sep 6 00:01:35.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.34:22-10.0.0.1:58572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:35.948695 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:01:35.948770 kernel: audit: type=1130 audit(1757116895.945:576): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.34:22-10.0.0.1:58572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.987000 audit[5261]: USER_ACCT pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:35.989612 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 58572 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:35.992558 kernel: audit: type=1101 audit(1757116895.987:577): pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:35.991000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:35.993105 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:35.997628 kernel: audit: type=1103 audit(1757116895.991:578): pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:35.997687 kernel: audit: type=1006 audit(1757116895.991:579): pid=5261 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=25 res=1 Sep 6 00:01:35.991000 audit[5261]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe591c460 a2=3 a3=1 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:36.002244 kernel: audit: type=1300 audit(1757116895.991:579): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe591c460 a2=3 a3=1 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:35.991000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:36.006225 kernel: audit: type=1327 audit(1757116895.991:579): proctitle=737368643A20636F7265205B707269765D Sep 6 00:01:36.006092 systemd-logind[1310]: New session 25 of user core. Sep 6 00:01:36.006593 systemd[1]: Started session-25.scope. 
Sep 6 00:01:36.010000 audit[5261]: USER_START pid=5261 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.017942 kernel: audit: type=1105 audit(1757116896.010:580): pid=5261 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.018004 kernel: audit: type=1103 audit(1757116896.010:581): pid=5264 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.010000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.212086 sshd[5261]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:36.211000 audit[5261]: USER_END pid=5261 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.215550 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:58572.service: Deactivated successfully. 
Sep 6 00:01:36.211000 audit[5261]: CRED_DISP pid=5261 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.216731 systemd-logind[1310]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:01:36.216889 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:01:36.217875 systemd-logind[1310]: Removed session 25. Sep 6 00:01:36.219433 kernel: audit: type=1106 audit(1757116896.211:582): pid=5261 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.219559 kernel: audit: type=1104 audit(1757116896.211:583): pid=5261 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:01:36.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.34:22-10.0.0.1:58572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:40.799000 audit[5305]: NETFILTER_CFG table=filter:133 family=2 entries=9 op=nft_register_rule pid=5305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:40.799000 audit[5305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffffac95b20 a2=0 a3=1 items=0 ppid=2218 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:40.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:40.807000 audit[5305]: NETFILTER_CFG table=nat:134 family=2 entries=55 op=nft_register_chain pid=5305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:01:40.807000 audit[5305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20100 a0=3 a1=fffffac95b20 a2=0 a3=1 items=0 ppid=2218 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:40.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:01:41.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.34:22-10.0.0.1:33186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.213579 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:33186.service. 
Sep 6 00:01:41.217196 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 6 00:01:41.217291 kernel: audit: type=1130 audit(1757116901.212:587): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.34:22-10.0.0.1:33186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:01:41.258000 audit[5329]: USER_ACCT pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.260208 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 33186 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4
Sep 6 00:01:41.261860 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:01:41.259000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.265211 kernel: audit: type=1101 audit(1757116901.258:588): pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.265286 kernel: audit: type=1103 audit(1757116901.259:589): pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.265543 kernel: audit: type=1006 audit(1757116901.259:590): pid=5329 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Sep 6 00:01:41.266041 systemd-logind[1310]: New session 26 of user core.
Sep 6 00:01:41.267469 kernel: audit: type=1300 audit(1757116901.259:590): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbe20fc0 a2=3 a3=1 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:01:41.259000 audit[5329]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbe20fc0 a2=3 a3=1 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:01:41.266847 systemd[1]: Started session-26.scope.
Sep 6 00:01:41.270059 kernel: audit: type=1327 audit(1757116901.259:590): proctitle=737368643A20636F7265205B707269765D
Sep 6 00:01:41.259000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 6 00:01:41.272000 audit[5329]: USER_START pid=5329 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.274000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.279420 kernel: audit: type=1105 audit(1757116901.272:591): pid=5329 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.279502 kernel: audit: type=1103 audit(1757116901.274:592): pid=5332 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.447791 sshd[5329]: pam_unix(sshd:session): session closed for user core
Sep 6 00:01:41.447000 audit[5329]: USER_END pid=5329 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.454489 kernel: audit: type=1106 audit(1757116901.447:593): pid=5329 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.454573 kernel: audit: type=1104 audit(1757116901.447:594): pid=5329 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.447000 audit[5329]: CRED_DISP pid=5329 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:01:41.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.34:22-10.0.0.1:33186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:01:41.455050 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:33186.service: Deactivated successfully.
Sep 6 00:01:41.455926 systemd[1]: session-26.scope: Deactivated successfully.
Sep 6 00:01:41.460338 systemd-logind[1310]: Session 26 logged out. Waiting for processes to exit.
Sep 6 00:01:41.466601 systemd-logind[1310]: Removed session 26.