Aug 12 23:57:30.767352 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:57:30.767379 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Aug 12 22:50:30 -00 2025
Aug 12 23:57:30.767387 kernel: efi: EFI v2.70 by EDK II
Aug 12 23:57:30.767393 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Aug 12 23:57:30.767398 kernel: random: crng init done
Aug 12 23:57:30.767403 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:57:30.767410 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Aug 12 23:57:30.767416 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:57:30.767422 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767427 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767433 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767438 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767443 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767449 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767459 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767465 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767471 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:57:30.767476 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 12 23:57:30.767482 kernel: NUMA: Failed to initialise from firmware
Aug 12 23:57:30.767488 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:57:30.767493 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Aug 12 23:57:30.767499 kernel: Zone ranges:
Aug 12 23:57:30.767511 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:57:30.767518 kernel: DMA32 empty
Aug 12 23:57:30.767528 kernel: Normal empty
Aug 12 23:57:30.767534 kernel: Movable zone start for each node
Aug 12 23:57:30.767540 kernel: Early memory node ranges
Aug 12 23:57:30.767545 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Aug 12 23:57:30.767551 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Aug 12 23:57:30.767557 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Aug 12 23:57:30.767563 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Aug 12 23:57:30.767568 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Aug 12 23:57:30.767574 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Aug 12 23:57:30.767579 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Aug 12 23:57:30.767585 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:57:30.767592 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 12 23:57:30.767598 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:57:30.767604 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:57:30.767609 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:57:30.767615 kernel: psci: Trusted OS migration not required
Aug 12 23:57:30.767644 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:57:30.767651 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:57:30.767659 kernel: ACPI: SRAT not present
Aug 12 23:57:30.767665 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Aug 12 23:57:30.767671 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Aug 12 23:57:30.767677 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 12 23:57:30.767683 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:57:30.767689 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:57:30.767695 kernel: CPU features: detected: Hardware dirty bit management
Aug 12 23:57:30.767701 kernel: CPU features: detected: Spectre-v4
Aug 12 23:57:30.767707 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:57:30.767716 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:57:30.767722 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:57:30.767728 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:57:30.767734 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:57:30.767740 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 12 23:57:30.767746 kernel: Policy zone: DMA
Aug 12 23:57:30.767754 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 12 23:57:30.767760 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:57:30.767766 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:57:30.767772 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:57:30.767778 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:57:30.767786 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Aug 12 23:57:30.767792 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:57:30.767798 kernel: trace event string verifier disabled
Aug 12 23:57:30.767805 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:57:30.767811 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:57:30.767817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:57:30.767824 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:57:30.767830 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:57:30.767836 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:57:30.767843 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:57:30.767849 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:57:30.767856 kernel: GICv3: 256 SPIs implemented
Aug 12 23:57:30.767862 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:57:30.767868 kernel: GICv3: Distributor has no Range Selector support
Aug 12 23:57:30.767874 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:57:30.767880 kernel: GICv3: 16 PPIs implemented
Aug 12 23:57:30.767887 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:57:30.767893 kernel: ACPI: SRAT not present
Aug 12 23:57:30.767899 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:57:30.767905 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:57:30.767911 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:57:30.767917 kernel: GICv3: using LPI property table @0x00000000400d0000
Aug 12 23:57:30.767923 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Aug 12 23:57:30.767931 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:57:30.767937 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:57:30.767943 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:57:30.767949 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:57:30.767956 kernel: arm-pv: using stolen time PV
Aug 12 23:57:30.767962 kernel: Console: colour dummy device 80x25
Aug 12 23:57:30.767968 kernel: ACPI: Core revision 20210730
Aug 12 23:57:30.767975 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:57:30.767981 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:57:30.767987 kernel: LSM: Security Framework initializing
Aug 12 23:57:30.767995 kernel: SELinux: Initializing.
Aug 12 23:57:30.768001 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:57:30.768008 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:57:30.768014 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:57:30.768020 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 12 23:57:30.768027 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 12 23:57:30.768033 kernel: Remapping and enabling EFI services.
Aug 12 23:57:30.768039 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:57:30.768045 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:57:30.768053 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:57:30.768059 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Aug 12 23:57:30.768066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:57:30.768072 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:57:30.768079 kernel: Detected PIPT I-cache on CPU2
Aug 12 23:57:30.768085 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 12 23:57:30.768092 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Aug 12 23:57:30.768098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:57:30.768104 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 12 23:57:30.768110 kernel: Detected PIPT I-cache on CPU3
Aug 12 23:57:30.768118 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 12 23:57:30.768124 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Aug 12 23:57:30.768130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:57:30.768137 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 12 23:57:30.768148 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:57:30.768155 kernel: SMP: Total of 4 processors activated.
Aug 12 23:57:30.768162 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:57:30.768168 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:57:30.768175 kernel: CPU features: detected: Common not Private translations
Aug 12 23:57:30.768182 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:57:30.768188 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:57:30.768195 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:57:30.768203 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:57:30.768209 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:57:30.768216 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:57:30.768222 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:57:30.768229 kernel: alternatives: patching kernel code
Aug 12 23:57:30.768237 kernel: devtmpfs: initialized
Aug 12 23:57:30.768243 kernel: KASLR enabled
Aug 12 23:57:30.768250 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:57:30.768257 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:57:30.768263 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:57:30.768270 kernel: SMBIOS 3.0.0 present.
Aug 12 23:57:30.768276 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Aug 12 23:57:30.768283 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:57:30.768289 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:57:30.768297 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:57:30.768304 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:57:30.768310 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:57:30.768317 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
Aug 12 23:57:30.768324 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:57:30.768330 kernel: cpuidle: using governor menu
Aug 12 23:57:30.768337 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:57:30.768343 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:57:30.768350 kernel: ACPI: bus type PCI registered
Aug 12 23:57:30.768358 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:57:30.768364 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:57:30.768371 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:57:30.768378 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:57:30.768384 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:57:30.768391 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:57:30.768397 kernel: cryptd: max_cpu_qlen set to 1000
Aug 12 23:57:30.768404 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:57:30.768410 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:57:30.768418 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:57:30.768424 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:57:30.768431 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 12 23:57:30.768437 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 12 23:57:30.768443 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 12 23:57:30.768450 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:57:30.768457 kernel: ACPI: Interpreter enabled
Aug 12 23:57:30.768463 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:57:30.768470 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:57:30.768478 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:57:30.768484 kernel: printk: console [ttyAMA0] enabled
Aug 12 23:57:30.768491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:57:30.768640 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:57:30.768712 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:57:30.768770 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:57:30.768827 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:57:30.769316 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:57:30.769336 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:57:30.769343 kernel: PCI host bridge to bus 0000:00
Aug 12 23:57:30.769437 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:57:30.769497 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:57:30.769569 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:57:30.769622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:57:30.769721 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 12 23:57:30.769795 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 12 23:57:30.769859 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 12 23:57:30.769917 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 12 23:57:30.769979 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:57:30.770041 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:57:30.770691 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 12 23:57:30.770806 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 12 23:57:30.770874 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:57:30.770931 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:57:30.770986 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:57:30.770996 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:57:30.771003 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:57:30.771010 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:57:30.771021 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:57:30.771028 kernel: iommu: Default domain type: Translated
Aug 12 23:57:30.771035 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:57:30.771043 kernel: vgaarb: loaded
Aug 12 23:57:30.771050 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 12 23:57:30.771057 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 12 23:57:30.771064 kernel: PTP clock support registered
Aug 12 23:57:30.771070 kernel: Registered efivars operations
Aug 12 23:57:30.771078 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:57:30.771086 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:57:30.771094 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:57:30.771101 kernel: pnp: PnP ACPI init
Aug 12 23:57:30.771174 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 12 23:57:30.771187 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:57:30.771193 kernel: NET: Registered PF_INET protocol family
Aug 12 23:57:30.771200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:57:30.771209 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:57:30.771216 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:57:30.771224 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:57:30.771233 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 12 23:57:30.771240 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:57:30.771247 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:57:30.771254 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:57:30.771261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:57:30.771267 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:57:30.771274 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 12 23:57:30.771281 kernel: kvm [1]: HYP mode not available
Aug 12 23:57:30.771288 kernel: Initialise system trusted keyrings
Aug 12 23:57:30.771296 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:57:30.771302 kernel: Key type asymmetric registered
Aug 12 23:57:30.771308 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:57:30.771315 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 12 23:57:30.771322 kernel: io scheduler mq-deadline registered
Aug 12 23:57:30.771328 kernel: io scheduler kyber registered
Aug 12 23:57:30.771335 kernel: io scheduler bfq registered
Aug 12 23:57:30.771341 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:57:30.771349 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:57:30.771356 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:57:30.771417 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 12 23:57:30.771426 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:57:30.771433 kernel: thunder_xcv, ver 1.0
Aug 12 23:57:30.771439 kernel: thunder_bgx, ver 1.0
Aug 12 23:57:30.771446 kernel: nicpf, ver 1.0
Aug 12 23:57:30.771453 kernel: nicvf, ver 1.0
Aug 12 23:57:30.771532 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:57:30.771591 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:57:30 UTC (1755043050)
Aug 12 23:57:30.771600 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:57:30.771607 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:57:30.771614 kernel: Segment Routing with IPv6
Aug 12 23:57:30.771620 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:57:30.771637 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:57:30.771644 kernel: Key type dns_resolver registered
Aug 12 23:57:30.771650 kernel: registered taskstats version 1
Aug 12 23:57:30.771659 kernel: Loading compiled-in X.509 certificates
Aug 12 23:57:30.771666 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 72b807ae6dac6ab18c2f4ab9460d3472cf28c19d'
Aug 12 23:57:30.771672 kernel: Key type .fscrypt registered
Aug 12 23:57:30.771679 kernel: Key type fscrypt-provisioning registered
Aug 12 23:57:30.771685 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:57:30.771692 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:57:30.771699 kernel: ima: No architecture policies found
Aug 12 23:57:30.771706 kernel: clk: Disabling unused clocks
Aug 12 23:57:30.771713 kernel: Freeing unused kernel memory: 36416K
Aug 12 23:57:30.771721 kernel: Run /init as init process
Aug 12 23:57:30.771727 kernel: with arguments:
Aug 12 23:57:30.771734 kernel: /init
Aug 12 23:57:30.771740 kernel: with environment:
Aug 12 23:57:30.771747 kernel: HOME=/
Aug 12 23:57:30.771753 kernel: TERM=linux
Aug 12 23:57:30.771760 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:57:30.771768 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 12 23:57:30.771778 systemd[1]: Detected virtualization kvm.
Aug 12 23:57:30.771786 systemd[1]: Detected architecture arm64.
Aug 12 23:57:30.771793 systemd[1]: Running in initrd.
Aug 12 23:57:30.771800 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:57:30.771807 systemd[1]: Hostname set to .
Aug 12 23:57:30.771814 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:57:30.771821 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:57:30.771828 systemd[1]: Started systemd-ask-password-console.path.
Aug 12 23:57:30.771837 systemd[1]: Reached target cryptsetup.target.
Aug 12 23:57:30.771844 systemd[1]: Reached target paths.target.
Aug 12 23:57:30.771852 systemd[1]: Reached target slices.target.
Aug 12 23:57:30.771859 systemd[1]: Reached target swap.target.
Aug 12 23:57:30.771866 systemd[1]: Reached target timers.target.
Aug 12 23:57:30.771873 systemd[1]: Listening on iscsid.socket.
Aug 12 23:57:30.771880 systemd[1]: Listening on iscsiuio.socket.
Aug 12 23:57:30.771888 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 12 23:57:30.771895 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 12 23:57:30.771903 systemd[1]: Listening on systemd-journald.socket.
Aug 12 23:57:30.771910 systemd[1]: Listening on systemd-networkd.socket.
Aug 12 23:57:30.771917 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 12 23:57:30.771924 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 12 23:57:30.771931 systemd[1]: Reached target sockets.target.
Aug 12 23:57:30.771938 systemd[1]: Starting kmod-static-nodes.service...
Aug 12 23:57:30.771945 systemd[1]: Finished network-cleanup.service.
Aug 12 23:57:30.771954 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:57:30.771961 systemd[1]: Starting systemd-journald.service...
Aug 12 23:57:30.771968 systemd[1]: Starting systemd-modules-load.service...
Aug 12 23:57:30.771976 systemd[1]: Starting systemd-resolved.service...
Aug 12 23:57:30.771983 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 12 23:57:30.771991 systemd[1]: Finished kmod-static-nodes.service.
Aug 12 23:57:30.771998 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:57:30.772005 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 12 23:57:30.772013 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 12 23:57:30.772022 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 12 23:57:30.772030 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 12 23:57:30.772042 systemd-journald[290]: Journal started
Aug 12 23:57:30.772088 systemd-journald[290]: Runtime Journal (/run/log/journal/ea71e6590d194fcfbe1d8a56556f3751) is 6.0M, max 48.7M, 42.6M free.
Aug 12 23:57:30.753335 systemd-modules-load[291]: Inserted module 'overlay'
Aug 12 23:57:30.774675 systemd[1]: Started systemd-journald.service.
Aug 12 23:57:30.774710 kernel: audit: type=1130 audit(1755043050.773:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.782426 systemd-resolved[292]: Positive Trust Anchors:
Aug 12 23:57:30.783321 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:57:30.782442 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:57:30.782473 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 12 23:57:30.791026 kernel: audit: type=1130 audit(1755043050.784:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.783885 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 12 23:57:30.793015 kernel: Bridge firewalling registered
Aug 12 23:57:30.786200 systemd[1]: Starting dracut-cmdline.service...
Aug 12 23:57:30.792306 systemd-modules-load[291]: Inserted module 'br_netfilter'
Aug 12 23:57:30.798721 kernel: audit: type=1130 audit(1755043050.794:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.793020 systemd-resolved[292]: Defaulting to hostname 'linux'.
Aug 12 23:57:30.794720 systemd[1]: Started systemd-resolved.service.
Aug 12 23:57:30.795438 systemd[1]: Reached target nss-lookup.target.
Aug 12 23:57:30.801388 dracut-cmdline[310]: dracut-dracut-053
Aug 12 23:57:30.804346 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 12 23:57:30.808653 kernel: SCSI subsystem initialized
Aug 12 23:57:30.816362 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:57:30.816413 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:57:30.816425 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 12 23:57:30.818725 systemd-modules-load[291]: Inserted module 'dm_multipath'
Aug 12 23:57:30.819501 systemd[1]: Finished systemd-modules-load.service.
Aug 12 23:57:30.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.822651 kernel: audit: type=1130 audit(1755043050.819:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.821051 systemd[1]: Starting systemd-sysctl.service...
Aug 12 23:57:30.829973 systemd[1]: Finished systemd-sysctl.service.
Aug 12 23:57:30.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.833667 kernel: audit: type=1130 audit(1755043050.830:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.877700 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:57:30.889766 kernel: iscsi: registered transport (tcp)
Aug 12 23:57:30.914661 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:57:30.914722 kernel: QLogic iSCSI HBA Driver
Aug 12 23:57:30.967260 systemd[1]: Finished dracut-cmdline.service.
Aug 12 23:57:30.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:30.969209 systemd[1]: Starting dracut-pre-udev.service...
Aug 12 23:57:30.971299 kernel: audit: type=1130 audit(1755043050.967:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:31.022659 kernel: raid6: neonx8 gen() 13687 MB/s
Aug 12 23:57:31.039677 kernel: raid6: neonx8 xor() 10801 MB/s
Aug 12 23:57:31.056674 kernel: raid6: neonx4 gen() 13526 MB/s
Aug 12 23:57:31.073665 kernel: raid6: neonx4 xor() 11233 MB/s
Aug 12 23:57:31.090763 kernel: raid6: neonx2 gen() 13004 MB/s
Aug 12 23:57:31.107674 kernel: raid6: neonx2 xor() 10182 MB/s
Aug 12 23:57:31.125671 kernel: raid6: neonx1 gen() 11144 MB/s
Aug 12 23:57:31.142672 kernel: raid6: neonx1 xor() 8753 MB/s
Aug 12 23:57:31.159668 kernel: raid6: int64x8 gen() 6229 MB/s
Aug 12 23:57:31.176673 kernel: raid6: int64x8 xor() 3531 MB/s
Aug 12 23:57:31.193673 kernel: raid6: int64x4 gen() 7209 MB/s
Aug 12 23:57:31.210680 kernel: raid6: int64x4 xor() 3845 MB/s
Aug 12 23:57:31.227661 kernel: raid6: int64x2 gen() 6149 MB/s
Aug 12 23:57:31.247561 kernel: raid6: int64x2 xor() 3314 MB/s
Aug 12 23:57:31.261686 kernel: raid6: int64x1 gen() 5028 MB/s
Aug 12 23:57:31.278969 kernel: raid6: int64x1 xor() 2638 MB/s
Aug 12 23:57:31.279028 kernel: raid6: using algorithm neonx8 gen() 13687 MB/s
Aug 12 23:57:31.279038 kernel: raid6: .... xor() 10801 MB/s, rmw enabled
Aug 12 23:57:31.279047 kernel: raid6: using neon recovery algorithm
Aug 12 23:57:31.289667 kernel: xor: measuring software checksum speed
Aug 12 23:57:31.290705 kernel: 8regs : 15957 MB/sec
Aug 12 23:57:31.290740 kernel: 32regs : 20723 MB/sec
Aug 12 23:57:31.291663 kernel: arm64_neon : 27635 MB/sec
Aug 12 23:57:31.291695 kernel: xor: using function: arm64_neon (27635 MB/sec)
Aug 12 23:57:31.347659 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Aug 12 23:57:31.358328 systemd[1]: Finished dracut-pre-udev.service.
Aug 12 23:57:31.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:31.360000 audit: BPF prog-id=7 op=LOAD
Aug 12 23:57:31.361900 kernel: audit: type=1130 audit(1755043051.358:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:31.361936 kernel: audit: type=1334 audit(1755043051.360:9): prog-id=7 op=LOAD
Aug 12 23:57:31.361946 kernel: audit: type=1334 audit(1755043051.361:10): prog-id=8 op=LOAD
Aug 12 23:57:31.361000 audit: BPF prog-id=8 op=LOAD
Aug 12 23:57:31.362372 systemd[1]: Starting systemd-udevd.service...
Aug 12 23:57:31.378219 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Aug 12 23:57:31.381585 systemd[1]: Started systemd-udevd.service.
Aug 12 23:57:31.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:31.383062 systemd[1]: Starting dracut-pre-trigger.service...
Aug 12 23:57:31.395172 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Aug 12 23:57:31.424224 systemd[1]: Finished dracut-pre-trigger.service.
Aug 12 23:57:31.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:31.425837 systemd[1]: Starting systemd-udev-trigger.service...
Aug 12 23:57:31.461151 systemd[1]: Finished systemd-udev-trigger.service.
Aug 12 23:57:31.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:31.494580 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:57:31.499197 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:57:31.499212 kernel: GPT:9289727 != 19775487
Aug 12 23:57:31.499222 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:57:31.499231 kernel: GPT:9289727 != 19775487
Aug 12 23:57:31.499239 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:57:31.499247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:57:31.514913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 12 23:57:31.517244 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (547)
Aug 12 23:57:31.522727 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 12 23:57:31.527163 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 12 23:57:31.527917 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 12 23:57:31.531696 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 12 23:57:31.533189 systemd[1]: Starting disk-uuid.service...
Aug 12 23:57:31.540139 disk-uuid[564]: Primary Header is updated.
disk-uuid[564]: Secondary Entries is updated.
disk-uuid[564]: Secondary Header is updated.
Aug 12 23:57:31.543652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:57:32.561650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:57:32.561748 disk-uuid[565]: The operation has completed successfully.
Aug 12 23:57:32.596460 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:57:32.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 12 23:57:32.596575 systemd[1]: Finished disk-uuid.service. Aug 12 23:57:32.601036 systemd[1]: Starting verity-setup.service... Aug 12 23:57:32.621678 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 12 23:57:32.663476 systemd[1]: Found device dev-mapper-usr.device. Aug 12 23:57:32.666471 systemd[1]: Mounting sysusr-usr.mount... Aug 12 23:57:32.668002 systemd[1]: Finished verity-setup.service. Aug 12 23:57:32.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:32.737646 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 12 23:57:32.738245 systemd[1]: Mounted sysusr-usr.mount. Aug 12 23:57:32.739192 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 12 23:57:32.740030 systemd[1]: Starting ignition-setup.service... Aug 12 23:57:32.742068 systemd[1]: Starting parse-ip-for-networkd.service... Aug 12 23:57:32.754023 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:57:32.754093 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:57:32.754104 kernel: BTRFS info (device vda6): has skinny extents Aug 12 23:57:32.764566 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 12 23:57:32.771983 systemd[1]: Finished ignition-setup.service. Aug 12 23:57:32.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:32.773817 systemd[1]: Starting ignition-fetch-offline.service... Aug 12 23:57:32.853026 systemd[1]: Finished parse-ip-for-networkd.service. 
Aug 12 23:57:32.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.856000 audit: BPF prog-id=9 op=LOAD
Aug 12 23:57:32.858249 systemd[1]: Starting systemd-networkd.service...
Aug 12 23:57:32.896613 systemd-networkd[741]: lo: Link UP
Aug 12 23:57:32.896642 systemd-networkd[741]: lo: Gained carrier
Aug 12 23:57:32.897076 systemd-networkd[741]: Enumeration completed
Aug 12 23:57:32.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.897274 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:57:32.898857 ignition[655]: Ignition 2.14.0
Aug 12 23:57:32.898360 systemd[1]: Started systemd-networkd.service.
Aug 12 23:57:32.898865 ignition[655]: Stage: fetch-offline
Aug 12 23:57:32.898541 systemd-networkd[741]: eth0: Link UP
Aug 12 23:57:32.898903 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:57:32.898544 systemd-networkd[741]: eth0: Gained carrier
Aug 12 23:57:32.898914 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:57:32.899432 systemd[1]: Reached target network.target.
Aug 12 23:57:32.899049 ignition[655]: parsed url from cmdline: ""
Aug 12 23:57:32.902406 systemd[1]: Starting iscsiuio.service...
Aug 12 23:57:32.899052 ignition[655]: no config URL provided
Aug 12 23:57:32.899057 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:57:32.899064 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:57:32.899084 ignition[655]: op(1): [started] loading QEMU firmware config module
Aug 12 23:57:32.899088 ignition[655]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 12 23:57:32.911209 ignition[655]: op(1): [finished] loading QEMU firmware config module
Aug 12 23:57:32.911234 ignition[655]: QEMU firmware config was not found. Ignoring...
Aug 12 23:57:32.918533 systemd[1]: Started iscsiuio.service.
Aug 12 23:57:32.920105 systemd[1]: Starting iscsid.service...
Aug 12 23:57:32.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.920719 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:57:32.924829 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Aug 12 23:57:32.924829 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Aug 12 23:57:32.924829 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Aug 12 23:57:32.924829 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored.
Aug 12 23:57:32.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.935000 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Aug 12 23:57:32.935000 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Aug 12 23:57:32.928300 systemd[1]: Started iscsid.service.
Aug 12 23:57:32.933832 systemd[1]: Starting dracut-initqueue.service...
Aug 12 23:57:32.944571 systemd[1]: Finished dracut-initqueue.service.
Aug 12 23:57:32.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.945465 systemd[1]: Reached target remote-fs-pre.target.
Aug 12 23:57:32.946791 systemd[1]: Reached target remote-cryptsetup.target.
Aug 12 23:57:32.948153 systemd[1]: Reached target remote-fs.target.
Aug 12 23:57:32.950437 systemd[1]: Starting dracut-pre-mount.service...
Aug 12 23:57:32.959490 systemd[1]: Finished dracut-pre-mount.service.
Aug 12 23:57:32.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.981504 ignition[655]: parsing config with SHA512: f14ce0d0d728830d2b76deec616f5335a7aed2241499eae267d2b8126b57b1c805bba208e363b84639a8ddb7fcafd89a2617a909ea2b330c81dbddd554d84577
Aug 12 23:57:32.992078 unknown[655]: fetched base config from "system"
Aug 12 23:57:32.992090 unknown[655]: fetched user config from "qemu"
Aug 12 23:57:32.992581 ignition[655]: fetch-offline: fetch-offline passed
Aug 12 23:57:32.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:32.993591 systemd[1]: Finished ignition-fetch-offline.service.
Aug 12 23:57:32.992663 ignition[655]: Ignition finished successfully
Aug 12 23:57:32.994519 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 12 23:57:32.995452 systemd[1]: Starting ignition-kargs.service...
Aug 12 23:57:33.005106 ignition[762]: Ignition 2.14.0
Aug 12 23:57:33.005117 ignition[762]: Stage: kargs
Aug 12 23:57:33.005228 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:57:33.007634 systemd[1]: Finished ignition-kargs.service.
Aug 12 23:57:33.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:33.005238 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:57:33.006237 ignition[762]: kargs: kargs passed
Aug 12 23:57:33.009929 systemd[1]: Starting ignition-disks.service...
Aug 12 23:57:33.006289 ignition[762]: Ignition finished successfully
Aug 12 23:57:33.018400 ignition[768]: Ignition 2.14.0
Aug 12 23:57:33.018412 ignition[768]: Stage: disks
Aug 12 23:57:33.018542 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:57:33.020709 systemd[1]: Finished ignition-disks.service.
Aug 12 23:57:33.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:33.018553 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:57:33.022156 systemd[1]: Reached target initrd-root-device.target.
Aug 12 23:57:33.019646 ignition[768]: disks: disks passed
Aug 12 23:57:33.023348 systemd[1]: Reached target local-fs-pre.target.
Aug 12 23:57:33.019701 ignition[768]: Ignition finished successfully
Aug 12 23:57:33.024966 systemd[1]: Reached target local-fs.target.
Aug 12 23:57:33.026273 systemd[1]: Reached target sysinit.target.
Aug 12 23:57:33.027369 systemd[1]: Reached target basic.target.
Aug 12 23:57:33.029662 systemd[1]: Starting systemd-fsck-root.service...
Aug 12 23:57:33.046607 systemd-fsck[776]: ROOT: clean, 629/553520 files, 56026/553472 blocks
Aug 12 23:57:33.051815 systemd[1]: Finished systemd-fsck-root.service.
Aug 12 23:57:33.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:33.056665 systemd[1]: Mounting sysroot.mount...
Aug 12 23:57:33.065570 systemd[1]: Mounted sysroot.mount.
Aug 12 23:57:33.066679 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Aug 12 23:57:33.066274 systemd[1]: Reached target initrd-root-fs.target.
Aug 12 23:57:33.068474 systemd[1]: Mounting sysroot-usr.mount...
Aug 12 23:57:33.069324 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Aug 12 23:57:33.069368 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:57:33.069393 systemd[1]: Reached target ignition-diskful.target.
Aug 12 23:57:33.071727 systemd[1]: Mounted sysroot-usr.mount.
Aug 12 23:57:33.074078 systemd[1]: Starting initrd-setup-root.service...
Aug 12 23:57:33.079167 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:57:33.084572 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:57:33.092126 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:57:33.097876 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:57:33.143830 systemd[1]: Finished initrd-setup-root.service.
Aug 12 23:57:33.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:33.145336 systemd[1]: Starting ignition-mount.service...
Aug 12 23:57:33.146650 systemd[1]: Starting sysroot-boot.service...
Aug 12 23:57:33.153032 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
Aug 12 23:57:33.163987 ignition[829]: INFO : Ignition 2.14.0
Aug 12 23:57:33.164966 ignition[829]: INFO : Stage: mount
Aug 12 23:57:33.165857 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:57:33.166715 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:57:33.167966 ignition[829]: INFO : mount: mount passed
Aug 12 23:57:33.167966 ignition[829]: INFO : Ignition finished successfully
Aug 12 23:57:33.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:33.168798 systemd[1]: Finished ignition-mount.service.
Aug 12 23:57:33.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:33.169972 systemd[1]: Finished sysroot-boot.service.
Aug 12 23:57:33.678150 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 12 23:57:33.685664 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (837)
Aug 12 23:57:33.687765 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:57:33.687796 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:57:33.687806 kernel: BTRFS info (device vda6): has skinny extents
Aug 12 23:57:33.694395 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 12 23:57:33.695891 systemd[1]: Starting ignition-files.service...
Aug 12 23:57:33.712290 ignition[857]: INFO : Ignition 2.14.0
Aug 12 23:57:33.712290 ignition[857]: INFO : Stage: files
Aug 12 23:57:33.714054 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:57:33.714054 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:57:33.714054 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
Aug 12 23:57:33.720161 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 12 23:57:33.720161 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 12 23:57:33.725094 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 12 23:57:33.726782 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 12 23:57:33.728393 unknown[857]: wrote ssh authorized keys file for user: core
Aug 12 23:57:33.729717 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 12 23:57:33.729717 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 12 23:57:33.729717 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 12 23:57:33.729717 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 12 23:57:33.729717 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 12 23:57:33.847551 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 12 23:57:34.158172 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 12 23:57:34.159762 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 12 23:57:34.161304 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 12 23:57:34.163393 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:57:34.165087 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:57:34.165087 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:57:34.168151 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Aug 12 23:57:34.495179 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 12 23:57:34.730762 systemd-networkd[741]: eth0: Gained IPv6LL
Aug 12 23:57:34.927018 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:57:34.927018 ignition[857]: INFO : files: op(c): [started] processing unit "containerd.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(c): [finished] processing unit "containerd.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:57:34.931029 ignition[857]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:57:34.980710 ignition[857]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:57:34.987012 ignition[857]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:57:34.987012 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:57:34.987012 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:57:34.987012 ignition[857]: INFO : files: files passed
Aug 12 23:57:34.987012 ignition[857]: INFO : Ignition finished successfully
Aug 12 23:57:34.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:34.983115 systemd[1]: Finished ignition-files.service.
Aug 12 23:57:34.984801 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Aug 12 23:57:34.985609 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Aug 12 23:57:34.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:34.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:34.996719 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Aug 12 23:57:35.002708 kernel: kauditd_printk_skb: 24 callbacks suppressed
Aug 12 23:57:35.002730 kernel: audit: type=1130 audit(1755043054.998:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:34.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:34.986283 systemd[1]: Starting ignition-quench.service...
Aug 12 23:57:35.003866 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:57:34.991797 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 12 23:57:34.991892 systemd[1]: Finished ignition-quench.service.
Aug 12 23:57:34.998367 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Aug 12 23:57:34.999283 systemd[1]: Reached target ignition-complete.target.
Aug 12 23:57:35.004088 systemd[1]: Starting initrd-parse-etc.service...
Aug 12 23:57:35.018379 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 12 23:57:35.018502 systemd[1]: Finished initrd-parse-etc.service.
Aug 12 23:57:35.024534 kernel: audit: type=1130 audit(1755043055.019:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.024560 kernel: audit: type=1131 audit(1755043055.019:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.020080 systemd[1]: Reached target initrd-fs.target.
Aug 12 23:57:35.025124 systemd[1]: Reached target initrd.target.
Aug 12 23:57:35.026574 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 12 23:57:35.027446 systemd[1]: Starting dracut-pre-pivot.service...
Aug 12 23:57:35.039219 systemd[1]: Finished dracut-pre-pivot.service.
Aug 12 23:57:35.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.040741 systemd[1]: Starting initrd-cleanup.service...
Aug 12 23:57:35.043875 kernel: audit: type=1130 audit(1755043055.039:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.048922 systemd[1]: Stopped target nss-lookup.target.
Aug 12 23:57:35.055242 kernel: audit: type=1131 audit(1755043055.051:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.049660 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 12 23:57:35.050924 systemd[1]: Stopped target timers.target.
Aug 12 23:57:35.051576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 12 23:57:35.051706 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 12 23:57:35.052440 systemd[1]: Stopped target initrd.target.
Aug 12 23:57:35.054830 systemd[1]: Stopped target basic.target.
Aug 12 23:57:35.055758 systemd[1]: Stopped target ignition-complete.target.
Aug 12 23:57:35.056870 systemd[1]: Stopped target ignition-diskful.target.
Aug 12 23:57:35.057945 systemd[1]: Stopped target initrd-root-device.target.
Aug 12 23:57:35.059178 systemd[1]: Stopped target remote-fs.target.
Aug 12 23:57:35.060244 systemd[1]: Stopped target remote-fs-pre.target.
Aug 12 23:57:35.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.061291 systemd[1]: Stopped target sysinit.target.
Aug 12 23:57:35.070227 kernel: audit: type=1131 audit(1755043055.066:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.062310 systemd[1]: Stopped target local-fs.target.
Aug 12 23:57:35.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.063664 systemd[1]: Stopped target local-fs-pre.target.
Aug 12 23:57:35.075973 kernel: audit: type=1131 audit(1755043055.070:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.076005 kernel: audit: type=1131 audit(1755043055.073:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:35.064839 systemd[1]: Stopped target swap.target.
Aug 12 23:57:35.065877 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 12 23:57:35.065996 systemd[1]: Stopped dracut-pre-mount.service.
Aug 12 23:57:35.066968 systemd[1]: Stopped target cryptsetup.target.
Aug 12 23:57:35.069877 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 12 23:57:35.069987 systemd[1]: Stopped dracut-initqueue.service.
Aug 12 23:57:35.070869 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 12 23:57:35.070955 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 12 23:57:35.073800 systemd[1]: Stopped target paths.target.
Aug 12 23:57:35.076543 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:57:35.080709 systemd[1]: Stopped systemd-ask-password-console.path. Aug 12 23:57:35.081442 systemd[1]: Stopped target slices.target. Aug 12 23:57:35.089798 kernel: audit: type=1131 audit(1755043055.087:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.082564 systemd[1]: Stopped target sockets.target. Aug 12 23:57:35.092644 kernel: audit: type=1131 audit(1755043055.089:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.083677 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:57:35.083747 systemd[1]: Closed iscsid.socket. Aug 12 23:57:35.084830 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:57:35.084890 systemd[1]: Closed iscsiuio.socket. Aug 12 23:57:35.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.085990 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Aug 12 23:57:35.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.086087 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 12 23:57:35.087239 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:57:35.087325 systemd[1]: Stopped ignition-files.service. Aug 12 23:57:35.100924 ignition[898]: INFO : Ignition 2.14.0 Aug 12 23:57:35.100924 ignition[898]: INFO : Stage: umount Aug 12 23:57:35.100924 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:57:35.100924 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:57:35.100924 ignition[898]: INFO : umount: umount passed Aug 12 23:57:35.100924 ignition[898]: INFO : Ignition finished successfully Aug 12 23:57:35.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:57:35.091309 systemd[1]: Stopping ignition-mount.service... Aug 12 23:57:35.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.094100 systemd[1]: Stopping sysroot-boot.service... Aug 12 23:57:35.094840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:57:35.095039 systemd[1]: Stopped systemd-udev-trigger.service. Aug 12 23:57:35.096581 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:57:35.096728 systemd[1]: Stopped dracut-pre-trigger.service. Aug 12 23:57:35.101740 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:57:35.101830 systemd[1]: Finished initrd-cleanup.service. Aug 12 23:57:35.103259 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:57:35.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.103336 systemd[1]: Stopped ignition-mount.service. Aug 12 23:57:35.105197 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:57:35.105508 systemd[1]: Stopped target network.target. Aug 12 23:57:35.106148 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:57:35.106198 systemd[1]: Stopped ignition-disks.service. Aug 12 23:57:35.106978 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:57:35.107017 systemd[1]: Stopped ignition-kargs.service. Aug 12 23:57:35.108131 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:57:35.108168 systemd[1]: Stopped ignition-setup.service. Aug 12 23:57:35.127000 audit: BPF prog-id=6 op=UNLOAD Aug 12 23:57:35.110170 systemd[1]: Stopping systemd-networkd.service... 
Aug 12 23:57:35.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.111282 systemd[1]: Stopping systemd-resolved.service... Aug 12 23:57:35.116267 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:57:35.116361 systemd[1]: Stopped systemd-resolved.service. Aug 12 23:57:35.125718 systemd-networkd[741]: eth0: DHCPv6 lease lost Aug 12 23:57:35.133000 audit: BPF prog-id=9 op=UNLOAD Aug 12 23:57:35.127393 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:57:35.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.127553 systemd[1]: Stopped systemd-networkd.service. Aug 12 23:57:35.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.130408 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:57:35.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.130443 systemd[1]: Closed systemd-networkd.socket. Aug 12 23:57:35.132263 systemd[1]: Stopping network-cleanup.service... Aug 12 23:57:35.135008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:57:35.135081 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 12 23:57:35.136402 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:57:35.136448 systemd[1]: Stopped systemd-sysctl.service. 
Aug 12 23:57:35.138487 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:57:35.138547 systemd[1]: Stopped systemd-modules-load.service. Aug 12 23:57:35.139728 systemd[1]: Stopping systemd-udevd.service... Aug 12 23:57:35.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.144671 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 12 23:57:35.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.149248 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 12 23:57:35.149378 systemd[1]: Stopped systemd-udevd.service. Aug 12 23:57:35.150705 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 12 23:57:35.150797 systemd[1]: Stopped network-cleanup.service. Aug 12 23:57:35.151805 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 12 23:57:35.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.151840 systemd[1]: Closed systemd-udevd-control.socket. Aug 12 23:57:35.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.153164 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 12 23:57:35.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:57:35.153197 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 12 23:57:35.154345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 12 23:57:35.154392 systemd[1]: Stopped dracut-pre-udev.service. Aug 12 23:57:35.156950 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 12 23:57:35.156998 systemd[1]: Stopped dracut-cmdline.service. Aug 12 23:57:35.158114 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:57:35.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.158149 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 12 23:57:35.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.160386 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 12 23:57:35.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.161510 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 12 23:57:35.161570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 12 23:57:35.165144 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:57:35.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.165190 systemd[1]: Stopped kmod-static-nodes.service. 
Aug 12 23:57:35.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.166392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:57:35.166428 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 12 23:57:35.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:35.168249 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 12 23:57:35.168808 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:57:35.168899 systemd[1]: Stopped sysroot-boot.service. Aug 12 23:57:35.170192 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 12 23:57:35.170268 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 12 23:57:35.171407 systemd[1]: Reached target initrd-switch-root.target. Aug 12 23:57:35.172400 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:57:35.172448 systemd[1]: Stopped initrd-setup-root.service. Aug 12 23:57:35.174377 systemd[1]: Starting initrd-switch-root.service... Aug 12 23:57:35.180526 systemd[1]: Switching root. Aug 12 23:57:35.183000 audit: BPF prog-id=8 op=UNLOAD Aug 12 23:57:35.183000 audit: BPF prog-id=7 op=UNLOAD Aug 12 23:57:35.183000 audit: BPF prog-id=5 op=UNLOAD Aug 12 23:57:35.183000 audit: BPF prog-id=4 op=UNLOAD Aug 12 23:57:35.183000 audit: BPF prog-id=3 op=UNLOAD Aug 12 23:57:35.200056 iscsid[748]: iscsid shutting down. 
Aug 12 23:57:35.200612 systemd-journald[290]: Journal stopped Aug 12 23:57:37.333659 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Aug 12 23:57:37.333718 kernel: SELinux: Class mctp_socket not defined in policy. Aug 12 23:57:37.333736 kernel: SELinux: Class anon_inode not defined in policy. Aug 12 23:57:37.333746 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 12 23:57:37.333758 kernel: SELinux: policy capability network_peer_controls=1 Aug 12 23:57:37.333767 kernel: SELinux: policy capability open_perms=1 Aug 12 23:57:37.333777 kernel: SELinux: policy capability extended_socket_class=1 Aug 12 23:57:37.333787 kernel: SELinux: policy capability always_check_network=0 Aug 12 23:57:37.333799 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 12 23:57:37.333817 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 12 23:57:37.333852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 12 23:57:37.333863 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 12 23:57:37.333874 systemd[1]: Successfully loaded SELinux policy in 43.404ms. Aug 12 23:57:37.333891 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.607ms. Aug 12 23:57:37.333903 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 12 23:57:37.333914 systemd[1]: Detected virtualization kvm. Aug 12 23:57:37.333924 systemd[1]: Detected architecture arm64. Aug 12 23:57:37.333934 systemd[1]: Detected first boot. Aug 12 23:57:37.333946 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:57:37.333957 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Aug 12 23:57:37.333967 systemd[1]: Populated /etc with preset unit settings. Aug 12 23:57:37.333978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 12 23:57:37.333991 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 12 23:57:37.334004 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:57:37.334015 systemd[1]: Queued start job for default target multi-user.target. Aug 12 23:57:37.334028 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 12 23:57:37.334039 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 12 23:57:37.334050 systemd[1]: Created slice system-addon\x2drun.slice. Aug 12 23:57:37.334060 systemd[1]: Created slice system-getty.slice. Aug 12 23:57:37.334071 systemd[1]: Created slice system-modprobe.slice. Aug 12 23:57:37.334081 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 12 23:57:37.334093 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 12 23:57:37.334103 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 12 23:57:37.334114 systemd[1]: Created slice user.slice. Aug 12 23:57:37.334126 systemd[1]: Started systemd-ask-password-console.path. Aug 12 23:57:37.334137 systemd[1]: Started systemd-ask-password-wall.path. Aug 12 23:57:37.334147 systemd[1]: Set up automount boot.automount. Aug 12 23:57:37.334157 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 12 23:57:37.334167 systemd[1]: Reached target integritysetup.target. Aug 12 23:57:37.334178 systemd[1]: Reached target remote-cryptsetup.target. Aug 12 23:57:37.334190 systemd[1]: Reached target remote-fs.target. 
Aug 12 23:57:37.334200 systemd[1]: Reached target slices.target. Aug 12 23:57:37.334211 systemd[1]: Reached target swap.target. Aug 12 23:57:37.334221 systemd[1]: Reached target torcx.target. Aug 12 23:57:37.334231 systemd[1]: Reached target veritysetup.target. Aug 12 23:57:37.334243 systemd[1]: Listening on systemd-coredump.socket. Aug 12 23:57:37.334254 systemd[1]: Listening on systemd-initctl.socket. Aug 12 23:57:37.334264 systemd[1]: Listening on systemd-journald-audit.socket. Aug 12 23:57:37.334276 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 12 23:57:37.334287 systemd[1]: Listening on systemd-journald.socket. Aug 12 23:57:37.334298 systemd[1]: Listening on systemd-networkd.socket. Aug 12 23:57:37.334308 systemd[1]: Listening on systemd-udevd-control.socket. Aug 12 23:57:37.334318 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 12 23:57:37.334330 systemd[1]: Listening on systemd-userdbd.socket. Aug 12 23:57:37.334340 systemd[1]: Mounting dev-hugepages.mount... Aug 12 23:57:37.334351 systemd[1]: Mounting dev-mqueue.mount... Aug 12 23:57:37.334362 systemd[1]: Mounting media.mount... Aug 12 23:57:37.334373 systemd[1]: Mounting sys-kernel-debug.mount... Aug 12 23:57:37.334384 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 12 23:57:37.334394 systemd[1]: Mounting tmp.mount... Aug 12 23:57:37.334404 systemd[1]: Starting flatcar-tmpfiles.service... Aug 12 23:57:37.334414 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 12 23:57:37.334426 systemd[1]: Starting kmod-static-nodes.service... Aug 12 23:57:37.334436 systemd[1]: Starting modprobe@configfs.service... Aug 12 23:57:37.334446 systemd[1]: Starting modprobe@dm_mod.service... Aug 12 23:57:37.334457 systemd[1]: Starting modprobe@drm.service... Aug 12 23:57:37.334467 systemd[1]: Starting modprobe@efi_pstore.service... Aug 12 23:57:37.334479 systemd[1]: Starting modprobe@fuse.service... 
Aug 12 23:57:37.334498 systemd[1]: Starting modprobe@loop.service... Aug 12 23:57:37.334512 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 12 23:57:37.334523 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 12 23:57:37.334533 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 12 23:57:37.334543 systemd[1]: Starting systemd-journald.service... Aug 12 23:57:37.334554 systemd[1]: Starting systemd-modules-load.service... Aug 12 23:57:37.334565 systemd[1]: Starting systemd-network-generator.service... Aug 12 23:57:37.334578 systemd[1]: Starting systemd-remount-fs.service... Aug 12 23:57:37.334589 systemd[1]: Starting systemd-udev-trigger.service... Aug 12 23:57:37.334599 systemd[1]: Mounted dev-hugepages.mount. Aug 12 23:57:37.334609 systemd[1]: Mounted dev-mqueue.mount. Aug 12 23:57:37.334619 systemd[1]: Mounted media.mount. Aug 12 23:57:37.334659 systemd[1]: Mounted sys-kernel-debug.mount. Aug 12 23:57:37.334669 kernel: fuse: init (API version 7.34) Aug 12 23:57:37.334686 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 12 23:57:37.334698 systemd[1]: Mounted tmp.mount. Aug 12 23:57:37.334708 systemd[1]: Finished kmod-static-nodes.service. Aug 12 23:57:37.334719 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 12 23:57:37.334729 systemd[1]: Finished modprobe@configfs.service. Aug 12 23:57:37.334741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:57:37.334752 systemd[1]: Finished modprobe@dm_mod.service. Aug 12 23:57:37.334763 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:57:37.334779 systemd[1]: Finished modprobe@drm.service. Aug 12 23:57:37.334790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:57:37.334800 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 12 23:57:37.334810 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 12 23:57:37.334824 systemd-journald[1024]: Journal started Aug 12 23:57:37.334868 systemd-journald[1024]: Runtime Journal (/run/log/journal/ea71e6590d194fcfbe1d8a56556f3751) is 6.0M, max 48.7M, 42.6M free. Aug 12 23:57:37.246000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 12 23:57:37.246000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 12 23:57:37.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:57:37.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.332000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 12 23:57:37.332000 audit[1024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd67a7150 a2=4000 a3=1 items=0 ppid=1 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:37.332000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 12 23:57:37.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.336465 systemd[1]: Finished modprobe@fuse.service. Aug 12 23:57:37.336754 systemd[1]: Started systemd-journald.service. Aug 12 23:57:37.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:57:37.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.338740 systemd[1]: Finished systemd-modules-load.service. Aug 12 23:57:37.339727 systemd[1]: Finished systemd-network-generator.service. Aug 12 23:57:37.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.341353 systemd[1]: Finished systemd-remount-fs.service. Aug 12 23:57:37.342238 kernel: loop: module loaded Aug 12 23:57:37.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.342748 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:57:37.343091 systemd[1]: Finished modprobe@loop.service. Aug 12 23:57:37.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:57:37.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.344260 systemd[1]: Reached target network-pre.target. Aug 12 23:57:37.346464 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 12 23:57:37.348455 systemd[1]: Mounting sys-kernel-config.mount... Aug 12 23:57:37.349507 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 12 23:57:37.353380 systemd[1]: Starting systemd-hwdb-update.service... Aug 12 23:57:37.355780 systemd[1]: Starting systemd-journal-flush.service... Aug 12 23:57:37.356787 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:57:37.358183 systemd[1]: Starting systemd-random-seed.service... Aug 12 23:57:37.359134 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 12 23:57:37.360836 systemd[1]: Starting systemd-sysctl.service... Aug 12 23:57:37.365367 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 12 23:57:37.366468 systemd[1]: Mounted sys-kernel-config.mount. Aug 12 23:57:37.370264 systemd[1]: Finished systemd-random-seed.service. Aug 12 23:57:37.371086 systemd[1]: Reached target first-boot-complete.target. Aug 12 23:57:37.371237 systemd-journald[1024]: Time spent on flushing to /var/log/journal/ea71e6590d194fcfbe1d8a56556f3751 is 12.192ms for 928 entries. Aug 12 23:57:37.371237 systemd-journald[1024]: System Journal (/var/log/journal/ea71e6590d194fcfbe1d8a56556f3751) is 8.0M, max 195.6M, 187.6M free. Aug 12 23:57:37.397831 systemd-journald[1024]: Received client request to flush runtime journal. 
Aug 12 23:57:37.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.380890 systemd[1]: Finished systemd-sysctl.service. Aug 12 23:57:37.391918 systemd[1]: Finished systemd-udev-trigger.service. Aug 12 23:57:37.393991 systemd[1]: Starting systemd-udev-settle.service... Aug 12 23:57:37.398868 systemd[1]: Finished systemd-journal-flush.service. Aug 12 23:57:37.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.402209 systemd[1]: Finished flatcar-tmpfiles.service. Aug 12 23:57:37.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:37.404846 systemd[1]: Starting systemd-sysusers.service... Aug 12 23:57:37.407987 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 12 23:57:37.424619 systemd[1]: Finished systemd-sysusers.service. 
Aug 12 23:57:37.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.426796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 12 23:57:37.451749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 12 23:57:37.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.776700 systemd[1]: Finished systemd-hwdb-update.service.
Aug 12 23:57:37.778645 systemd[1]: Starting systemd-udevd.service...
Aug 12 23:57:37.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.795961 systemd-udevd[1092]: Using default interface naming scheme 'v252'.
Aug 12 23:57:37.810289 systemd[1]: Started systemd-udevd.service.
Aug 12 23:57:37.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.812509 systemd[1]: Starting systemd-networkd.service...
Aug 12 23:57:37.835545 systemd[1]: Found device dev-ttyAMA0.device.
Aug 12 23:57:37.843199 systemd[1]: Starting systemd-userdbd.service...
Aug 12 23:57:37.874105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 12 23:57:37.899723 systemd[1]: Started systemd-userdbd.service.
Aug 12 23:57:37.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.910109 systemd[1]: Finished systemd-udev-settle.service.
Aug 12 23:57:37.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.912001 systemd[1]: Starting lvm2-activation-early.service...
Aug 12 23:57:37.925198 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:57:37.949858 systemd[1]: Finished lvm2-activation-early.service.
Aug 12 23:57:37.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.950788 systemd[1]: Reached target cryptsetup.target.
Aug 12 23:57:37.950961 systemd-networkd[1099]: lo: Link UP
Aug 12 23:57:37.950971 systemd-networkd[1099]: lo: Gained carrier
Aug 12 23:57:37.952638 systemd[1]: Starting lvm2-activation.service...
Aug 12 23:57:37.953189 systemd-networkd[1099]: Enumeration completed
Aug 12 23:57:37.953294 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:57:37.953389 systemd[1]: Started systemd-networkd.service.
Aug 12 23:57:37.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.954646 systemd-networkd[1099]: eth0: Link UP
Aug 12 23:57:37.954654 systemd-networkd[1099]: eth0: Gained carrier
Aug 12 23:57:37.956667 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:57:37.970778 systemd-networkd[1099]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:57:37.982715 systemd[1]: Finished lvm2-activation.service.
Aug 12 23:57:37.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:37.983504 systemd[1]: Reached target local-fs-pre.target.
Aug 12 23:57:37.984133 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:57:37.984161 systemd[1]: Reached target local-fs.target.
Aug 12 23:57:37.984729 systemd[1]: Reached target machines.target.
Aug 12 23:57:37.986523 systemd[1]: Starting ldconfig.service...
Aug 12 23:57:37.988223 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 12 23:57:37.988283 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 12 23:57:37.989499 systemd[1]: Starting systemd-boot-update.service...
Aug 12 23:57:37.991157 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Aug 12 23:57:37.993056 systemd[1]: Starting systemd-machine-id-commit.service...
Aug 12 23:57:37.994770 systemd[1]: Starting systemd-sysext.service...
Aug 12 23:57:38.004012 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl)
Aug 12 23:57:38.005802 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Aug 12 23:57:38.007131 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Aug 12 23:57:38.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.010726 systemd[1]: Unmounting usr-share-oem.mount...
Aug 12 23:57:38.014241 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Aug 12 23:57:38.014523 systemd[1]: Unmounted usr-share-oem.mount.
Aug 12 23:57:38.085644 kernel: loop0: detected capacity change from 0 to 203944
Aug 12 23:57:38.090545 systemd[1]: Finished systemd-machine-id-commit.service.
Aug 12 23:57:38.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.102662 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:57:38.110433 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31)
Aug 12 23:57:38.110433 systemd-fsck[1143]: /dev/vda1: 236 files, 117307/258078 clusters
Aug 12 23:57:38.112772 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Aug 12 23:57:38.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.177654 kernel: loop1: detected capacity change from 0 to 203944
Aug 12 23:57:38.184856 (sd-sysext)[1149]: Using extensions 'kubernetes'.
Aug 12 23:57:38.185354 (sd-sysext)[1149]: Merged extensions into '/usr'.
Aug 12 23:57:38.204088 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.205592 systemd[1]: Starting modprobe@dm_mod.service...
Aug 12 23:57:38.207712 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 12 23:57:38.212565 systemd[1]: Starting modprobe@loop.service...
Aug 12 23:57:38.213652 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.213824 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 12 23:57:38.214792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:57:38.214973 systemd[1]: Finished modprobe@dm_mod.service.
Aug 12 23:57:38.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.216399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:57:38.216574 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 12 23:57:38.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.217993 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:57:38.218159 systemd[1]: Finished modprobe@loop.service.
Aug 12 23:57:38.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.219458 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:57:38.219579 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.264089 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:57:38.267904 systemd[1]: Finished ldconfig.service.
Aug 12 23:57:38.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.313320 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:57:38.315200 systemd[1]: Mounting boot.mount...
Aug 12 23:57:38.316966 systemd[1]: Mounting usr-share-oem.mount...
Aug 12 23:57:38.323571 systemd[1]: Mounted boot.mount.
Aug 12 23:57:38.324464 systemd[1]: Mounted usr-share-oem.mount.
Aug 12 23:57:38.326330 systemd[1]: Finished systemd-sysext.service.
Aug 12 23:57:38.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.328467 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:57:38.330320 systemd[1]: Starting systemd-tmpfiles-setup.service...
Aug 12 23:57:38.331492 systemd[1]: Finished systemd-boot-update.service.
Aug 12 23:57:38.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.336262 systemd[1]: Reloading.
Aug 12 23:57:38.341328 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Aug 12 23:57:38.342570 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:57:38.343983 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:57:38.376222 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-08-12T23:57:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 12 23:57:38.376252 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-08-12T23:57:38Z" level=info msg="torcx already run"
Aug 12 23:57:38.449650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 12 23:57:38.449941 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 12 23:57:38.467748 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:57:38.518260 systemd[1]: Finished systemd-tmpfiles-setup.service.
Aug 12 23:57:38.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.522430 systemd[1]: Starting audit-rules.service...
Aug 12 23:57:38.524934 systemd[1]: Starting clean-ca-certificates.service...
Aug 12 23:57:38.527823 systemd[1]: Starting systemd-journal-catalog-update.service...
Aug 12 23:57:38.531298 systemd[1]: Starting systemd-resolved.service...
Aug 12 23:57:38.534674 systemd[1]: Starting systemd-timesyncd.service...
Aug 12 23:57:38.537400 systemd[1]: Starting systemd-update-utmp.service...
Aug 12 23:57:38.539060 systemd[1]: Finished clean-ca-certificates.service.
Aug 12 23:57:38.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.542757 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:57:38.546577 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.548186 systemd[1]: Starting modprobe@dm_mod.service...
Aug 12 23:57:38.550407 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 12 23:57:38.552668 systemd[1]: Starting modprobe@loop.service...
Aug 12 23:57:38.553330 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.553509 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 12 23:57:38.553760 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:57:38.556123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:57:38.556324 systemd[1]: Finished modprobe@dm_mod.service.
Aug 12 23:57:38.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.556000 audit[1242]: SYSTEM_BOOT pid=1242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.557550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:57:38.557726 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 12 23:57:38.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.558809 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:57:38.558967 systemd[1]: Finished modprobe@loop.service.
Aug 12 23:57:38.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.562852 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.564408 systemd[1]: Starting modprobe@dm_mod.service...
Aug 12 23:57:38.566785 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 12 23:57:38.568883 systemd[1]: Starting modprobe@loop.service...
Aug 12 23:57:38.569642 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.569849 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 12 23:57:38.570040 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:57:38.571281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:57:38.571461 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 12 23:57:38.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.572781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:57:38.572929 systemd[1]: Finished modprobe@loop.service.
Aug 12 23:57:38.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.575313 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:57:38.577164 systemd[1]: Finished systemd-update-utmp.service.
Aug 12 23:57:38.578315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:57:38.578497 systemd[1]: Finished modprobe@dm_mod.service.
Aug 12 23:57:38.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.581680 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.583324 systemd[1]: Starting modprobe@dm_mod.service...
Aug 12 23:57:38.586824 systemd[1]: Starting modprobe@drm.service...
Aug 12 23:57:38.589996 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 12 23:57:38.593445 systemd[1]: Starting modprobe@loop.service...
Aug 12 23:57:38.594180 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.594349 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 12 23:57:38.595819 systemd[1]: Starting systemd-networkd-wait-online.service...
Aug 12 23:57:38.596677 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:57:38.597983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:57:38.598150 systemd[1]: Finished modprobe@dm_mod.service.
Aug 12 23:57:38.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.599400 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 12 23:57:38.600604 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:57:38.600882 systemd[1]: Finished modprobe@drm.service.
Aug 12 23:57:38.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.602035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:57:38.602189 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 12 23:57:38.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.603496 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:57:38.603685 systemd[1]: Finished modprobe@loop.service.
Aug 12 23:57:38.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.604836 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:57:38.604953 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.606451 systemd[1]: Starting systemd-update-done.service...
Aug 12 23:57:38.615156 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:57:38.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.622927 systemd[1]: Finished systemd-update-done.service.
Aug 12 23:57:38.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.624803 systemd[1]: Started systemd-timesyncd.service.
Aug 12 23:57:38.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:38.625850 systemd[1]: Reached target time-set.target.
Aug 12 23:57:38.626012 systemd-timesyncd[1241]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 12 23:57:38.626294 systemd-timesyncd[1241]: Initial clock synchronization to Tue 2025-08-12 23:57:38.845918 UTC.
Aug 12 23:57:38.646477 systemd-resolved[1240]: Positive Trust Anchors:
Aug 12 23:57:38.646497 systemd-resolved[1240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:57:38.646524 systemd-resolved[1240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 12 23:57:38.651000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 12 23:57:38.651000 audit[1286]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0572b60 a2=420 a3=0 items=0 ppid=1235 pid=1286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:57:38.651000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 12 23:57:38.652275 augenrules[1286]: No rules
Aug 12 23:57:38.652969 systemd[1]: Finished audit-rules.service.
Aug 12 23:57:38.667222 systemd-resolved[1240]: Defaulting to hostname 'linux'.
Aug 12 23:57:38.668758 systemd[1]: Started systemd-resolved.service.
Aug 12 23:57:38.669464 systemd[1]: Reached target network.target.
Aug 12 23:57:38.670078 systemd[1]: Reached target nss-lookup.target.
Aug 12 23:57:38.670654 systemd[1]: Reached target sysinit.target.
Aug 12 23:57:38.671273 systemd[1]: Started motdgen.path.
Aug 12 23:57:38.671866 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 12 23:57:38.672792 systemd[1]: Started logrotate.timer.
Aug 12 23:57:38.673426 systemd[1]: Started mdadm.timer.
Aug 12 23:57:38.673948 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 12 23:57:38.674583 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:57:38.674614 systemd[1]: Reached target paths.target.
Aug 12 23:57:38.675150 systemd[1]: Reached target timers.target.
Aug 12 23:57:38.676065 systemd[1]: Listening on dbus.socket.
Aug 12 23:57:38.677895 systemd[1]: Starting docker.socket...
Aug 12 23:57:38.679693 systemd[1]: Listening on sshd.socket.
Aug 12 23:57:38.680337 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 12 23:57:38.680760 systemd[1]: Listening on docker.socket.
Aug 12 23:57:38.681359 systemd[1]: Reached target sockets.target.
Aug 12 23:57:38.682050 systemd[1]: Reached target basic.target.
Aug 12 23:57:38.682795 systemd[1]: System is tainted: cgroupsv1
Aug 12 23:57:38.682842 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.682865 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 12 23:57:38.684058 systemd[1]: Starting containerd.service...
Aug 12 23:57:38.685831 systemd[1]: Starting dbus.service...
Aug 12 23:57:38.687538 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 12 23:57:38.689667 systemd[1]: Starting extend-filesystems.service...
Aug 12 23:57:38.690558 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 12 23:57:38.692303 systemd[1]: Starting motdgen.service...
Aug 12 23:57:38.695015 systemd[1]: Starting prepare-helm.service...
Aug 12 23:57:38.696463 jq[1297]: false
Aug 12 23:57:38.700088 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 12 23:57:38.702433 systemd[1]: Starting sshd-keygen.service...
Aug 12 23:57:38.705271 systemd[1]: Starting systemd-logind.service...
Aug 12 23:57:38.706135 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 12 23:57:38.706219 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:57:38.707673 systemd[1]: Starting update-engine.service...
Aug 12 23:57:38.709513 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 12 23:57:38.712187 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:57:38.712451 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 12 23:57:38.713657 jq[1316]: true
Aug 12 23:57:38.713815 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:57:38.714188 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 12 23:57:38.725225 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:57:38.725472 systemd[1]: Finished motdgen.service.
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found loop1
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda1
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda2
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda3
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found usr
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda4
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda6
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda7
Aug 12 23:57:38.726654 extend-filesystems[1298]: Found vda9
Aug 12 23:57:38.726654 extend-filesystems[1298]: Checking size of /dev/vda9
Aug 12 23:57:38.745804 jq[1322]: true
Aug 12 23:57:38.747478 tar[1319]: linux-arm64/helm
Aug 12 23:57:38.753868 dbus-daemon[1296]: [system] SELinux support is enabled
Aug 12 23:57:38.754271 systemd[1]: Started dbus.service.
Aug 12 23:57:38.756974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:57:38.757006 systemd[1]: Reached target system-config.target.
Aug 12 23:57:38.757734 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:57:38.757749 systemd[1]: Reached target user-config.target.
Aug 12 23:57:38.795614 extend-filesystems[1298]: Resized partition /dev/vda9
Aug 12 23:57:38.803091 systemd-logind[1309]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 12 23:57:38.806972 extend-filesystems[1355]: resize2fs 1.46.5 (30-Dec-2021)
Aug 12 23:57:38.803301 systemd-logind[1309]: New seat seat0.
Aug 12 23:57:38.808539 systemd[1]: Started systemd-logind.service.
Aug 12 23:57:38.809557 bash[1349]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:57:38.810102 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 12 23:57:38.837660 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 12 23:57:38.842144 update_engine[1312]: I0812 23:57:38.841797 1312 main.cc:92] Flatcar Update Engine starting
Aug 12 23:57:38.844925 systemd[1]: Started update-engine.service.
Aug 12 23:57:38.845131 update_engine[1312]: I0812 23:57:38.844969 1312 update_check_scheduler.cc:74] Next update check in 11m15s
Aug 12 23:57:38.847632 systemd[1]: Started locksmithd.service.
Aug 12 23:57:38.863938 env[1323]: time="2025-08-12T23:57:38.862708680Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 12 23:57:38.877651 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 12 23:57:38.888942 env[1323]: time="2025-08-12T23:57:38.888888840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 12 23:57:38.915579 env[1323]: time="2025-08-12T23:57:38.915306720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:57:38.916139 extend-filesystems[1355]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 12 23:57:38.916139 extend-filesystems[1355]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 12 23:57:38.916139 extend-filesystems[1355]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 12 23:57:38.920526 extend-filesystems[1298]: Resized filesystem in /dev/vda9
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.917290440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.917323640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.917641920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.917663600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.917677560Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.917689040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.917768280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.918049360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.918204040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:57:38.921553 env[1323]: time="2025-08-12T23:57:38.918219520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 12 23:57:38.917235 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:57:38.921969 env[1323]: time="2025-08-12T23:57:38.918272560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 12 23:57:38.921969 env[1323]: time="2025-08-12T23:57:38.921295320Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:57:38.917513 systemd[1]: Finished extend-filesystems.service.
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943140720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943195400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943218480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943277320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943293960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943311640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943332800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943752200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943775280Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943800560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943820760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.943834400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.944010360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 12 23:57:38.944261 env[1323]: time="2025-08-12T23:57:38.944106560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 12 23:57:38.945247 env[1323]: time="2025-08-12T23:57:38.945117880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 12 23:57:38.945316 env[1323]: time="2025-08-12T23:57:38.945263240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945316 env[1323]: time="2025-08-12T23:57:38.945289200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 12 23:57:38.945459 env[1323]: time="2025-08-12T23:57:38.945428840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945459 env[1323]: time="2025-08-12T23:57:38.945453160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945545 env[1323]: time="2025-08-12T23:57:38.945475720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945545 env[1323]: time="2025-08-12T23:57:38.945502560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945545 env[1323]: time="2025-08-12T23:57:38.945519840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945545 env[1323]: time="2025-08-12T23:57:38.945535880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945621 env[1323]: time="2025-08-12T23:57:38.945551240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945621 env[1323]: time="2025-08-12T23:57:38.945566560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945621 env[1323]: time="2025-08-12T23:57:38.945584840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 12 23:57:38.945784 env[1323]: time="2025-08-12T23:57:38.945760560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945814 env[1323]: time="2025-08-12T23:57:38.945789040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945814 env[1323]: time="2025-08-12T23:57:38.945807040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.945852 env[1323]: time="2025-08-12T23:57:38.945822840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 12 23:57:38.945852 env[1323]: time="2025-08-12T23:57:38.945842600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 12 23:57:38.945911 env[1323]: time="2025-08-12T23:57:38.945858240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 12 23:57:38.945911 env[1323]: time="2025-08-12T23:57:38.945880760Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 12 23:57:38.945949 env[1323]: time="2025-08-12T23:57:38.945921040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 12 23:57:38.946265 env[1323]: time="2025-08-12T23:57:38.946214400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.946283040Z" level=info msg="Connect containerd service"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.946325160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.948294120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.948522160Z" level=info msg="Start subscribing containerd event"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.948587400Z" level=info msg="Start recovering state"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.948684280Z" level=info msg="Start event monitor"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.948707760Z" level=info msg="Start snapshots syncer"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.948719600Z" level=info msg="Start cni network conf syncer for default"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.948734440Z" level=info msg="Start streaming server"
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.949077680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.949120440Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 12 23:57:38.950677 env[1323]: time="2025-08-12T23:57:38.950574120Z" level=info msg="containerd successfully booted in 0.088853s"
Aug 12 23:57:38.949337 systemd[1]: Started containerd.service.
Aug 12 23:57:38.970708 locksmithd[1357]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:57:39.179715 tar[1319]: linux-arm64/LICENSE
Aug 12 23:57:39.179825 tar[1319]: linux-arm64/README.md
Aug 12 23:57:39.184088 systemd[1]: Finished prepare-helm.service.
Aug 12 23:57:39.403827 systemd-networkd[1099]: eth0: Gained IPv6LL
Aug 12 23:57:39.405631 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 12 23:57:39.406793 systemd[1]: Reached target network-online.target.
Aug 12 23:57:39.409700 systemd[1]: Starting kubelet.service...
Aug 12 23:57:39.972718 sshd_keygen[1329]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 12 23:57:39.993967 systemd[1]: Finished sshd-keygen.service.
Aug 12 23:57:39.996373 systemd[1]: Starting issuegen.service...
Aug 12 23:57:40.001695 systemd[1]: issuegen.service: Deactivated successfully.
Aug 12 23:57:40.001961 systemd[1]: Finished issuegen.service.
Aug 12 23:57:40.004528 systemd[1]: Starting systemd-user-sessions.service...
Aug 12 23:57:40.014034 systemd[1]: Finished systemd-user-sessions.service.
Aug 12 23:57:40.016736 systemd[1]: Started getty@tty1.service.
Aug 12 23:57:40.019118 systemd[1]: Started serial-getty@ttyAMA0.service.
Aug 12 23:57:40.020206 systemd[1]: Reached target getty.target.
Aug 12 23:57:40.083957 systemd[1]: Started kubelet.service.
Aug 12 23:57:40.085050 systemd[1]: Reached target multi-user.target.
Aug 12 23:57:40.087259 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Aug 12 23:57:40.095113 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Aug 12 23:57:40.095371 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Aug 12 23:57:40.096610 systemd[1]: Startup finished in 5.309s (kernel) + 4.823s (userspace) = 10.132s.
Aug 12 23:57:40.600305 kubelet[1397]: E0812 23:57:40.600247    1397 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:57:40.602367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:57:40.602544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:57:43.255033 systemd[1]: Created slice system-sshd.slice.
Aug 12 23:57:43.256347 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:58958.service.
Aug 12 23:57:43.315202 sshd[1407]: Accepted publickey for core from 10.0.0.1 port 58958 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:57:43.318500 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:43.335288 systemd-logind[1309]: New session 1 of user core.
Aug 12 23:57:43.336853 systemd[1]: Created slice user-500.slice.
Aug 12 23:57:43.337982 systemd[1]: Starting user-runtime-dir@500.service...
Aug 12 23:57:43.348920 systemd[1]: Finished user-runtime-dir@500.service.
Aug 12 23:57:43.350515 systemd[1]: Starting user@500.service...
Aug 12 23:57:43.354738 (systemd)[1412]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:43.433445 systemd[1412]: Queued start job for default target default.target.
Aug 12 23:57:43.433739 systemd[1412]: Reached target paths.target.
Aug 12 23:57:43.433755 systemd[1412]: Reached target sockets.target.
Aug 12 23:57:43.433766 systemd[1412]: Reached target timers.target.
Aug 12 23:57:43.433776 systemd[1412]: Reached target basic.target.
Aug 12 23:57:43.433826 systemd[1412]: Reached target default.target.
Aug 12 23:57:43.433849 systemd[1412]: Startup finished in 71ms.
Aug 12 23:57:43.434458 systemd[1]: Started user@500.service.
Aug 12 23:57:43.435527 systemd[1]: Started session-1.scope.
Aug 12 23:57:43.487739 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:58966.service.
Aug 12 23:57:43.534410 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 58966 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:57:43.535999 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:43.541571 systemd[1]: Started session-2.scope.
Aug 12 23:57:43.541803 systemd-logind[1309]: New session 2 of user core.
Aug 12 23:57:43.601850 sshd[1421]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:43.603722 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:58978.service.
Aug 12 23:57:43.608982 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:58966.service: Deactivated successfully.
Aug 12 23:57:43.609044 systemd-logind[1309]: Session 2 logged out. Waiting for processes to exit.
Aug 12 23:57:43.609946 systemd[1]: session-2.scope: Deactivated successfully.
Aug 12 23:57:43.610367 systemd-logind[1309]: Removed session 2.
Aug 12 23:57:43.644713 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 58978 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:57:43.646267 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:43.654610 systemd-logind[1309]: New session 3 of user core.
Aug 12 23:57:43.655476 systemd[1]: Started session-3.scope.
Aug 12 23:57:43.711458 sshd[1426]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:43.714103 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:58990.service.
Aug 12 23:57:43.716361 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:58978.service: Deactivated successfully.
Aug 12 23:57:43.717488 systemd-logind[1309]: Session 3 logged out. Waiting for processes to exit.
Aug 12 23:57:43.717498 systemd[1]: session-3.scope: Deactivated successfully.
Aug 12 23:57:43.719164 systemd-logind[1309]: Removed session 3.
Aug 12 23:57:43.756096 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:57:43.766416 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:43.770628 systemd-logind[1309]: New session 4 of user core.
Aug 12 23:57:43.771523 systemd[1]: Started session-4.scope.
Aug 12 23:57:43.832275 sshd[1433]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:43.834690 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:59006.service.
Aug 12 23:57:43.836613 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:58990.service: Deactivated successfully.
Aug 12 23:57:43.837866 systemd[1]: session-4.scope: Deactivated successfully.
Aug 12 23:57:43.838346 systemd-logind[1309]: Session 4 logged out. Waiting for processes to exit.
Aug 12 23:57:43.839200 systemd-logind[1309]: Removed session 4.
Aug 12 23:57:43.877989 sshd[1440]: Accepted publickey for core from 10.0.0.1 port 59006 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:57:43.879527 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:43.883864 systemd-logind[1309]: New session 5 of user core.
Aug 12 23:57:43.884181 systemd[1]: Started session-5.scope.
Aug 12 23:57:43.954061 sudo[1446]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 12 23:57:43.954382 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 12 23:57:43.969968 dbus-daemon[1296]: avc:  received setenforce notice (enforcing=1)
Aug 12 23:57:43.972116 sudo[1446]: pam_unix(sudo:session): session closed for user root
Aug 12 23:57:43.975151 sshd[1440]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:43.978027 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:59012.service.
Aug 12 23:57:43.982373 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:59006.service: Deactivated successfully.
Aug 12 23:57:43.983455 systemd-logind[1309]: Session 5 logged out. Waiting for processes to exit.
Aug 12 23:57:43.983460 systemd[1]: session-5.scope: Deactivated successfully.
Aug 12 23:57:43.984618 systemd-logind[1309]: Removed session 5.
Aug 12 23:57:44.025366 sshd[1448]: Accepted publickey for core from 10.0.0.1 port 59012 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:57:44.026981 sshd[1448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:44.031682 systemd[1]: Started session-6.scope.
Aug 12 23:57:44.032167 systemd-logind[1309]: New session 6 of user core.
Aug 12 23:57:44.088234 sudo[1455]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 12 23:57:44.089385 sudo[1455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 12 23:57:44.093290 sudo[1455]: pam_unix(sudo:session): session closed for user root
Aug 12 23:57:44.098700 sudo[1454]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 12 23:57:44.098963 sudo[1454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 12 23:57:44.109398 systemd[1]: Stopping audit-rules.service...
Aug 12 23:57:44.110000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Aug 12 23:57:44.111019 auditctl[1458]: No rules
Aug 12 23:57:44.111920 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:57:44.112203 systemd[1]: Stopped audit-rules.service.
Aug 12 23:57:44.112610 kernel: kauditd_printk_skb: 122 callbacks suppressed
Aug 12 23:57:44.112660 kernel: audit: type=1305 audit(1755043064.110:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Aug 12 23:57:44.112705 kernel: audit: type=1300 audit(1755043064.110:156): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6b30f90 a2=420 a3=0 items=0 ppid=1 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:57:44.110000 audit[1458]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6b30f90 a2=420 a3=0 items=0 ppid=1 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:57:44.114279 systemd[1]: Starting audit-rules.service...
Aug 12 23:57:44.115399 kernel: audit: type=1327 audit(1755043064.110:156): proctitle=2F7362696E2F617564697463746C002D44
Aug 12 23:57:44.110000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Aug 12 23:57:44.116198 kernel: audit: type=1131 audit(1755043064.111:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.135609 augenrules[1476]: No rules
Aug 12 23:57:44.136496 systemd[1]: Finished audit-rules.service.
Aug 12 23:57:44.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.136000 audit[1454]: USER_END pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.137833 sudo[1454]: pam_unix(sudo:session): session closed for user root
Aug 12 23:57:44.142474 kernel: audit: type=1130 audit(1755043064.135:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.142566 kernel: audit: type=1106 audit(1755043064.136:159): pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.142607 kernel: audit: type=1104 audit(1755043064.136:160): pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.136000 audit[1454]: CRED_DISP pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.144269 sshd[1448]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:44.144882 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:59016.service.
Aug 12 23:57:44.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.49:22-10.0.0.1:59016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.150489 kernel: audit: type=1130 audit(1755043064.143:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.49:22-10.0.0.1:59016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.150614 kernel: audit: type=1106 audit(1755043064.144:162): pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.150665 kernel: audit: type=1104 audit(1755043064.144:163): pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.144000 audit[1448]: USER_END pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.144000 audit[1448]: CRED_DISP pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.150633 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:59012.service: Deactivated successfully.
Aug 12 23:57:44.152060 systemd[1]: session-6.scope: Deactivated successfully.
Aug 12 23:57:44.152369 systemd-logind[1309]: Session 6 logged out. Waiting for processes to exit.
Aug 12 23:57:44.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.49:22-10.0.0.1:59012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.156315 systemd-logind[1309]: Removed session 6.
Aug 12 23:57:44.188000 audit[1481]: USER_ACCT pid=1481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.189261 sshd[1481]: Accepted publickey for core from 10.0.0.1 port 59016 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:57:44.189000 audit[1481]: CRED_ACQ pid=1481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.190000 audit[1481]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeeb16970 a2=3 a3=1 items=0 ppid=1 pid=1481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:57:44.190000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 12 23:57:44.191016 sshd[1481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:57:44.196475 systemd[1]: Started session-7.scope.
Aug 12 23:57:44.196743 systemd-logind[1309]: New session 7 of user core.
Aug 12 23:57:44.201000 audit[1481]: USER_START pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.203000 audit[1486]: CRED_ACQ pid=1486 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:57:44.252000 audit[1487]: USER_ACCT pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.253752 sudo[1487]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 12 23:57:44.253000 audit[1487]: CRED_REFR pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.254820 sudo[1487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 12 23:57:44.257000 audit[1487]: USER_START pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 12 23:57:44.317568 systemd[1]: Starting docker.service...
Aug 12 23:57:44.407366 env[1499]: time="2025-08-12T23:57:44.407214201Z" level=info msg="Starting up"
Aug 12 23:57:44.409663 env[1499]: time="2025-08-12T23:57:44.409523736Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 12 23:57:44.409848 env[1499]: time="2025-08-12T23:57:44.409816124Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 12 23:57:44.409967 env[1499]: time="2025-08-12T23:57:44.409946254Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 12 23:57:44.410049 env[1499]: time="2025-08-12T23:57:44.410035172Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 12 23:57:44.412879 env[1499]: time="2025-08-12T23:57:44.412845027Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 12 23:57:44.412999 env[1499]: time="2025-08-12T23:57:44.412985542Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 12 23:57:44.413062 env[1499]: time="2025-08-12T23:57:44.413047038Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 12 23:57:44.413136 env[1499]: time="2025-08-12T23:57:44.413118472Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 12 23:57:44.419043 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2074247790-merged.mount: Deactivated successfully.
Aug 12 23:57:44.631072 env[1499]: time="2025-08-12T23:57:44.631020902Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 12 23:57:44.631072 env[1499]: time="2025-08-12T23:57:44.631051041Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 12 23:57:44.631337 env[1499]: time="2025-08-12T23:57:44.631315643Z" level=info msg="Loading containers: start."
Aug 12 23:57:44.694000 audit[1533]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.694000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff2550a00 a2=0 a3=1 items=0 ppid=1499 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.694000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 12 23:57:44.696000 audit[1535]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.696000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffda7f1730 a2=0 a3=1 items=0 ppid=1499 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.696000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 12 23:57:44.698000 audit[1537]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.698000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc7c9af90 a2=0 a3=1 items=0 ppid=1499 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.698000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 12 23:57:44.701000 
audit[1539]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.701000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc0d5a6b0 a2=0 a3=1 items=0 ppid=1499 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.701000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 12 23:57:44.706000 audit[1541]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.706000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffddea19c0 a2=0 a3=1 items=0 ppid=1499 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.706000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 12 23:57:44.733000 audit[1546]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.733000 audit[1546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe9ce89b0 a2=0 a3=1 items=0 ppid=1499 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.733000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 12 23:57:44.741000 audit[1548]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.741000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe18b53d0 a2=0 a3=1 items=0 ppid=1499 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.741000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 12 23:57:44.743000 audit[1550]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.743000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffeb23e160 a2=0 a3=1 items=0 ppid=1499 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.743000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 12 23:57:44.746000 audit[1552]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.746000 audit[1552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc4be9500 a2=0 a3=1 items=0 ppid=1499 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.746000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 12 23:57:44.756000 audit[1556]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.756000 audit[1556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc87c0890 a2=0 a3=1 items=0 ppid=1499 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.756000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 12 23:57:44.768000 audit[1557]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.768000 audit[1557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc7412570 a2=0 a3=1 items=0 ppid=1499 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.768000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 12 23:57:44.780669 kernel: Initializing XFRM netlink socket Aug 12 23:57:44.810220 env[1499]: time="2025-08-12T23:57:44.810156704Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 12 23:57:44.827000 audit[1566]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.827000 audit[1566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffe8bfbe50 a2=0 a3=1 items=0 ppid=1499 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.827000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 12 23:57:44.841000 audit[1569]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.841000 audit[1569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcc67eb10 a2=0 a3=1 items=0 ppid=1499 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.841000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 12 23:57:44.845000 audit[1572]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.845000 audit[1572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffcc2f180 a2=0 a3=1 items=0 ppid=1499 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 
12 23:57:44.845000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 12 23:57:44.847000 audit[1574]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.847000 audit[1574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe4125700 a2=0 a3=1 items=0 ppid=1499 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.847000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 12 23:57:44.849000 audit[1576]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.849000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe6ebb7b0 a2=0 a3=1 items=0 ppid=1499 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.849000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 12 23:57:44.851000 audit[1578]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.851000 audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe6635330 a2=0 a3=1 items=0 ppid=1499 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 12 23:57:44.854000 audit[1580]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.854000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffcacf9290 a2=0 a3=1 items=0 ppid=1499 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.854000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 12 23:57:44.862000 audit[1583]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.862000 audit[1583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff58aec70 a2=0 a3=1 items=0 ppid=1499 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.862000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 12 23:57:44.864000 audit[1585]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.864000 
audit[1585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffe6cc6ed0 a2=0 a3=1 items=0 ppid=1499 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.864000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 12 23:57:44.868000 audit[1587]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.868000 audit[1587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc7974fc0 a2=0 a3=1 items=0 ppid=1499 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.868000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 12 23:57:44.870000 audit[1589]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.870000 audit[1589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffdc69cf0 a2=0 a3=1 items=0 ppid=1499 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.870000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 12 23:57:44.871879 systemd-networkd[1099]: docker0: Link UP Aug 12 23:57:44.880000 audit[1593]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.880000 audit[1593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffda9ae6b0 a2=0 a3=1 items=0 ppid=1499 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.880000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 12 23:57:44.891000 audit[1594]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:57:44.891000 audit[1594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcd2d2200 a2=0 a3=1 items=0 ppid=1499 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:57:44.891000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 12 23:57:44.892532 env[1499]: time="2025-08-12T23:57:44.892484468Z" level=info msg="Loading containers: done." Aug 12 23:57:44.919978 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1180348357-merged.mount: Deactivated successfully. 
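The audit `PROCTITLE` records above carry the full iptables command line hex-encoded, with NUL bytes separating the argv elements. A small standard-library sketch decodes them; the sample payload is copied verbatim from the first `PROCTITLE` record in this log (empty argv slots from doubled NULs are simply dropped):

```python
def decode_proctitle(hex_argv: str) -> str:
    """Decode an audit PROCTITLE hex payload: argv elements are
    NUL-separated, so split on 0x00 and join with spaces."""
    raw = bytes.fromhex(hex_argv)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# First PROCTITLE record from the log above:
cmd = decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
)
print(cmd)  # /usr/sbin/iptables --wait -t nat -N DOCKER
```

Decoding the remaining records the same way shows Docker creating its usual `DOCKER`, `DOCKER-USER`, and `DOCKER-ISOLATION-STAGE-1/2` chains in the nat and filter tables.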
Aug 12 23:57:44.932208 env[1499]: time="2025-08-12T23:57:44.932151463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:57:44.932631 env[1499]: time="2025-08-12T23:57:44.932607894Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 12 23:57:44.932888 env[1499]: time="2025-08-12T23:57:44.932867425Z" level=info msg="Daemon has completed initialization" Aug 12 23:57:44.961324 systemd[1]: Started docker.service. Aug 12 23:57:44.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:44.970272 env[1499]: time="2025-08-12T23:57:44.969943328Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:57:45.685990 env[1323]: time="2025-08-12T23:57:45.685908651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 12 23:57:46.364061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164511144.mount: Deactivated successfully. 
Aug 12 23:57:47.574186 env[1323]: time="2025-08-12T23:57:47.574134111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:47.579197 env[1323]: time="2025-08-12T23:57:47.579137241Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:47.581962 env[1323]: time="2025-08-12T23:57:47.581894056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:47.584760 env[1323]: time="2025-08-12T23:57:47.584711721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:47.585698 env[1323]: time="2025-08-12T23:57:47.585629434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 12 23:57:47.599330 env[1323]: time="2025-08-12T23:57:47.599282241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 12 23:57:49.003310 env[1323]: time="2025-08-12T23:57:49.003206663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:49.005722 env[1323]: time="2025-08-12T23:57:49.005662469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 12 23:57:49.008228 env[1323]: time="2025-08-12T23:57:49.008182616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:49.011605 env[1323]: time="2025-08-12T23:57:49.011552913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:49.012730 env[1323]: time="2025-08-12T23:57:49.012675499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 12 23:57:49.013314 env[1323]: time="2025-08-12T23:57:49.013259736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 12 23:57:50.367862 env[1323]: time="2025-08-12T23:57:50.367774387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:50.370445 env[1323]: time="2025-08-12T23:57:50.370399265Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:50.373687 env[1323]: time="2025-08-12T23:57:50.373605963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:50.375160 env[1323]: time="2025-08-12T23:57:50.375108257Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:50.376117 env[1323]: time="2025-08-12T23:57:50.376079649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 12 23:57:50.377262 env[1323]: time="2025-08-12T23:57:50.377229284Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 12 23:57:50.853462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:57:50.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:50.853660 systemd[1]: Stopped kubelet.service. Aug 12 23:57:50.855833 kernel: kauditd_printk_skb: 84 callbacks suppressed Aug 12 23:57:50.855940 kernel: audit: type=1130 audit(1755043070.852:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:50.855296 systemd[1]: Starting kubelet.service... Aug 12 23:57:50.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:50.859439 kernel: audit: type=1131 audit(1755043070.852:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:50.968009 systemd[1]: Started kubelet.service. 
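Each kernel audit line above also carries its own `audit(<epoch>.<ms>:<serial>)` stamp. Converting the epoch confirms it agrees with the journal timestamp on the same line; a quick sketch, using the value from the `type=1130` record above:

```python
from datetime import datetime, timezone

def audit_time(stamp: str) -> datetime:
    """Parse the '<epoch>.<ms>:<serial>' portion of a kernel audit
    record and return the event time as an aware UTC datetime."""
    epoch = float(stamp.split(":", 1)[0])
    return datetime.fromtimestamp(epoch, tz=timezone.utc)

# type=1130 record from the log above:
t = audit_time("1755043070.852:198")
print(t.strftime("%b %d %H:%M:%S"))  # matches the "Aug 12 23:57:50" journal stamp
```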
Aug 12 23:57:50.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:50.970682 kernel: audit: type=1130 audit(1755043070.967:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:57:51.035221 kubelet[1640]: E0812 23:57:51.035161 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:57:51.037389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:57:51.037551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:57:51.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 12 23:57:51.040661 kernel: audit: type=1131 audit(1755043071.037:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 12 23:57:51.541379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559398733.mount: Deactivated successfully. 
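The kubelet exits here because `/var/lib/kubelet/config.yaml` does not yet exist; on a kubeadm-managed node that file is only written by `kubeadm init`/`kubeadm join`, so until then the unit fails with `status=1/FAILURE` and systemd schedules the restarts counted above. A minimal sketch of the same fail-fast behaviour (an illustration of the error path, not the kubelet's actual Go implementation):

```python
from pathlib import Path

# Path taken from the error message in the log above.
CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")

def load_kubelet_config(path: Path = CONFIG_PATH) -> str:
    """Treat a missing config file as fatal, the way the kubelet does:
    the process exits non-zero and systemd records 1/FAILURE."""
    try:
        return path.read_text()
    except FileNotFoundError as exc:
        raise SystemExit(
            f"failed to load kubelet config file, path: {path}, error: {exc}"
        )
```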
Aug 12 23:57:52.339478 env[1323]: time="2025-08-12T23:57:52.339427413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:52.343722 env[1323]: time="2025-08-12T23:57:52.343613370Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:52.345239 env[1323]: time="2025-08-12T23:57:52.345210275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:52.346509 env[1323]: time="2025-08-12T23:57:52.346470516Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:52.347189 env[1323]: time="2025-08-12T23:57:52.347159239Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 12 23:57:52.347723 env[1323]: time="2025-08-12T23:57:52.347612224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:57:52.949852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732495322.mount: Deactivated successfully. 
Aug 12 23:57:53.884580 env[1323]: time="2025-08-12T23:57:53.884526864Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:53.889964 env[1323]: time="2025-08-12T23:57:53.889905385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:53.893404 env[1323]: time="2025-08-12T23:57:53.893351784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:53.895339 env[1323]: time="2025-08-12T23:57:53.895302465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:53.896335 env[1323]: time="2025-08-12T23:57:53.896296544Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 12 23:57:53.897952 env[1323]: time="2025-08-12T23:57:53.897896262Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:57:54.640924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265553173.mount: Deactivated successfully. 
Aug 12 23:57:54.652089 env[1323]: time="2025-08-12T23:57:54.651226130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:54.656103 env[1323]: time="2025-08-12T23:57:54.654654690Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:54.657318 env[1323]: time="2025-08-12T23:57:54.657243665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:54.660199 env[1323]: time="2025-08-12T23:57:54.660124439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:57:54.662456 env[1323]: time="2025-08-12T23:57:54.661000759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 12 23:57:54.662456 env[1323]: time="2025-08-12T23:57:54.662052248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 12 23:57:55.186366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1381179117.mount: Deactivated successfully. 
Aug 12 23:57:57.367647 env[1323]: time="2025-08-12T23:57:57.367565534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:57:57.393668 env[1323]: time="2025-08-12T23:57:57.393602200Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:57:57.411523 env[1323]: time="2025-08-12T23:57:57.411457612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:57:57.415342 env[1323]: time="2025-08-12T23:57:57.415269588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:57:57.416216 env[1323]: time="2025-08-12T23:57:57.416139666Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Aug 12 23:58:01.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:01.242063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 12 23:58:01.242260 systemd[1]: Stopped kubelet.service.
Aug 12 23:58:01.243868 systemd[1]: Starting kubelet.service...
Aug 12 23:58:01.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:01.247233 kernel: audit: type=1130 audit(1755043081.240:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:01.247323 kernel: audit: type=1131 audit(1755043081.240:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:01.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:01.358496 systemd[1]: Started kubelet.service.
Aug 12 23:58:01.361690 kernel: audit: type=1130 audit(1755043081.357:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:01.406698 kubelet[1676]: E0812 23:58:01.406623 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:58:01.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 12 23:58:01.408537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:58:01.408704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:58:01.411656 kernel: audit: type=1131 audit(1755043081.407:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 12 23:58:02.341020 systemd[1]: Stopped kubelet.service.
Aug 12 23:58:02.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.344580 systemd[1]: Starting kubelet.service...
Aug 12 23:58:02.345572 kernel: audit: type=1130 audit(1755043082.341:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.345650 kernel: audit: type=1131 audit(1755043082.341:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.369309 systemd[1]: Reloading.
Aug 12 23:58:02.431837 /usr/lib/systemd/system-generators/torcx-generator[1712]: time="2025-08-12T23:58:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 12 23:58:02.431865 /usr/lib/systemd/system-generators/torcx-generator[1712]: time="2025-08-12T23:58:02Z" level=info msg="torcx already run"
Aug 12 23:58:02.527984 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 12 23:58:02.528005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 12 23:58:02.546148 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:58:02.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.604863 systemd[1]: Started kubelet.service.
Aug 12 23:58:02.608688 kernel: audit: type=1130 audit(1755043082.605:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.613742 systemd[1]: Stopping kubelet.service...
Aug 12 23:58:02.614316 systemd[1]: kubelet.service: Deactivated successfully.
Aug 12 23:58:02.614577 systemd[1]: Stopped kubelet.service.
Aug 12 23:58:02.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.617650 kernel: audit: type=1131 audit(1755043082.613:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.619063 systemd[1]: Starting kubelet.service...
Aug 12 23:58:02.721481 systemd[1]: Started kubelet.service.
Aug 12 23:58:02.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.724647 kernel: audit: type=1130 audit(1755043082.720:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:58:02.762881 kubelet[1775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:58:02.762881 kubelet[1775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:58:02.762881 kubelet[1775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:58:02.763410 kubelet[1775]: I0812 23:58:02.762927 1775 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:58:04.522903 kubelet[1775]: I0812 23:58:04.522864 1775 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 12 23:58:04.522903 kubelet[1775]: I0812 23:58:04.522900 1775 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:58:04.523338 kubelet[1775]: I0812 23:58:04.523321 1775 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 12 23:58:04.572589 kubelet[1775]: E0812 23:58:04.572550 1775 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:04.574121 kubelet[1775]: I0812 23:58:04.574099 1775 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:58:04.586941 kubelet[1775]: E0812 23:58:04.586897 1775 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 12 23:58:04.586941 kubelet[1775]: I0812 23:58:04.586939 1775 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 12 23:58:04.590985 kubelet[1775]: I0812 23:58:04.590950 1775 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:58:04.592264 kubelet[1775]: I0812 23:58:04.592225 1775 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 12 23:58:04.592434 kubelet[1775]: I0812 23:58:04.592398 1775 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:58:04.592610 kubelet[1775]: I0812 23:58:04.592432 1775 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Aug 12 23:58:04.592701 kubelet[1775]: I0812 23:58:04.592685 1775 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:58:04.592701 kubelet[1775]: I0812 23:58:04.592695 1775 container_manager_linux.go:300] "Creating device plugin manager"
Aug 12 23:58:04.593004 kubelet[1775]: I0812 23:58:04.592980 1775 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:58:04.601165 kubelet[1775]: I0812 23:58:04.601131 1775 kubelet.go:408] "Attempting to sync node with API server"
Aug 12 23:58:04.601230 kubelet[1775]: I0812 23:58:04.601177 1775 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:58:04.601230 kubelet[1775]: I0812 23:58:04.601213 1775 kubelet.go:314] "Adding apiserver pod source"
Aug 12 23:58:04.601290 kubelet[1775]: I0812 23:58:04.601230 1775 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:58:04.626197 kubelet[1775]: W0812 23:58:04.626024 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Aug 12 23:58:04.626470 kubelet[1775]: W0812 23:58:04.626030 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Aug 12 23:58:04.628563 kubelet[1775]: E0812 23:58:04.626368 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:04.629621 kubelet[1775]: E0812 23:58:04.629581 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:04.631432 kubelet[1775]: I0812 23:58:04.631385 1775 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 12 23:58:04.632641 kubelet[1775]: I0812 23:58:04.632611 1775 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:58:04.633008 kubelet[1775]: W0812 23:58:04.632995 1775 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 12 23:58:04.634338 kubelet[1775]: I0812 23:58:04.634319 1775 server.go:1274] "Started kubelet"
Aug 12 23:58:04.634000 audit[1775]: AVC avc: denied { mac_admin } for pid=1775 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 12 23:58:04.636452 kubelet[1775]: I0812 23:58:04.636394 1775 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Aug 12 23:58:04.636496 kubelet[1775]: I0812 23:58:04.636452 1775 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Aug 12 23:58:04.636533 kubelet[1775]: I0812 23:58:04.636521 1775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:58:04.634000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 12 23:58:04.634000 audit[1775]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000af6d80 a1=400070aa08 a2=4000af6d50 a3=25 items=0 ppid=1 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 12 23:58:04.634000 audit[1775]: AVC avc: denied { mac_admin } for pid=1775 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 12 23:58:04.634000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 12 23:58:04.634000 audit[1775]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000501000 a1=400070aa20 a2=4000af6e10 a3=25 items=0 ppid=1 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 12 23:58:04.640327 kernel: audit: type=1400 audit(1755043084.634:211): avc: denied { mac_admin } for pid=1775 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 12 23:58:04.638000 audit[1788]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.638000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffa574110 a2=0 a3=1 items=0 ppid=1775 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Aug 12 23:58:04.639000 audit[1789]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.639000 audit[1789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5f669d0 a2=0 a3=1 items=0 ppid=1775 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Aug 12 23:58:04.645196 kubelet[1775]: I0812 23:58:04.645154 1775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:58:04.646339 kubelet[1775]: I0812 23:58:04.646309 1775 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 12 23:58:04.646585 kubelet[1775]: I0812 23:58:04.646568 1775 server.go:449] "Adding debug handlers to kubelet server"
Aug 12 23:58:04.647343 kubelet[1775]: E0812 23:58:04.647219 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms"
Aug 12 23:58:04.647866 kubelet[1775]: I0812 23:58:04.647421 1775 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:58:04.647866 kubelet[1775]: I0812 23:58:04.647464 1775 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 12 23:58:04.647866 kubelet[1775]: W0812 23:58:04.647774 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Aug 12 23:58:04.647866 kubelet[1775]: E0812 23:58:04.647824 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:04.645000 audit[1791]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.645000 audit[1791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff86621f0 a2=0 a3=1 items=0 ppid=1775 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Aug 12 23:58:04.649189 kubelet[1775]: I0812 23:58:04.648550 1775 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:58:04.649189 kubelet[1775]: I0812 23:58:04.648644 1775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:58:04.649400 kubelet[1775]: E0812 23:58:04.646659 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:58:04.649471 kubelet[1775]: I0812 23:58:04.649419 1775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:58:04.649697 kubelet[1775]: I0812 23:58:04.649682 1775 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:58:04.649810 kubelet[1775]: I0812 23:58:04.645290 1775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:58:04.651204 kubelet[1775]: E0812 23:58:04.651181 1775 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:58:04.653707 kubelet[1775]: I0812 23:58:04.653684 1775 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:58:04.654000 audit[1793]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.654000 audit[1793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd7dcb460 a2=0 a3=1 items=0 ppid=1775 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Aug 12 23:58:04.664697 kubelet[1775]: E0812 23:58:04.657919 1775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a630922125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:58:04.634296924 +0000 UTC m=+1.907639755,LastTimestamp:2025-08-12 23:58:04.634296924 +0000 UTC m=+1.907639755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 12 23:58:04.667000 audit[1800]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.667000 audit[1800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffa3edd00 a2=0 a3=1 items=0 ppid=1775 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Aug 12 23:58:04.669937 kubelet[1775]: I0812 23:58:04.669879 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:58:04.669000 audit[1801]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Aug 12 23:58:04.669000 audit[1801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcb12ec50 a2=0 a3=1 items=0 ppid=1775 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Aug 12 23:58:04.669000 audit[1802]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1802 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.669000 audit[1802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffdf5b8e0 a2=0 a3=1 items=0 ppid=1775 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Aug 12 23:58:04.671334 kubelet[1775]: I0812 23:58:04.671293 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:58:04.671334 kubelet[1775]: I0812 23:58:04.671325 1775 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 12 23:58:04.671409 kubelet[1775]: I0812 23:58:04.671345 1775 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 12 23:58:04.671409 kubelet[1775]: E0812 23:58:04.671396 1775 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:58:04.672433 kubelet[1775]: W0812 23:58:04.672254 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Aug 12 23:58:04.672433 kubelet[1775]: E0812 23:58:04.672318 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:04.670000 audit[1804]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Aug 12 23:58:04.670000 audit[1804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3fc9e40 a2=0 a3=1 items=0 ppid=1775 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Aug 12 23:58:04.672000 audit[1803]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.672000 audit[1803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff31af550 a2=0 a3=1 items=0 ppid=1775 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Aug 12 23:58:04.672000 audit[1805]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Aug 12 23:58:04.672000 audit[1805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffe65094e0 a2=0 a3=1 items=0 ppid=1775 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Aug 12 23:58:04.673000 audit[1806]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 12 23:58:04.673000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1b147d0 a2=0 a3=1 items=0 ppid=1775 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Aug 12 23:58:04.673000 audit[1807]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Aug 12 23:58:04.673000 audit[1807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff88a74f0 a2=0 a3=1 items=0 ppid=1775 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Aug 12 23:58:04.677844 kubelet[1775]: I0812 23:58:04.677824 1775 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 12 23:58:04.677844 kubelet[1775]: I0812 23:58:04.677840 1775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 12 23:58:04.677938 kubelet[1775]: I0812 23:58:04.677860 1775 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:58:04.749547 kubelet[1775]: E0812 23:58:04.749497 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:58:04.772098 kubelet[1775]: E0812 23:58:04.772063 1775 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 12 23:58:04.790932 kubelet[1775]: I0812 23:58:04.790848 1775 policy_none.go:49] "None policy: Start"
Aug 12 23:58:04.791724 kubelet[1775]: I0812 23:58:04.791698 1775 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 12 23:58:04.791851 kubelet[1775]: I0812 23:58:04.791840 1775 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:58:04.802284 kubelet[1775]: I0812 23:58:04.802254 1775 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:58:04.800000 audit[1775]: AVC avc: denied { mac_admin } for pid=1775 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 12 23:58:04.800000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 12 23:58:04.800000 audit[1775]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ce5c50 a1=4000d22258 a2=4000ce5c20 a3=25 items=0 ppid=1 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:04.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 12 23:58:04.802703 kubelet[1775]: I0812 23:58:04.802681 1775 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Aug 12 23:58:04.802881 kubelet[1775]: I0812 23:58:04.802867 1775 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:58:04.802975 kubelet[1775]: I0812 23:58:04.802939 1775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:58:04.803311 kubelet[1775]: I0812 23:58:04.803294 1775 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:58:04.804667 kubelet[1775]: E0812 23:58:04.804603 1775 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 12 23:58:04.848295 kubelet[1775]: E0812 23:58:04.848251 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms"
Aug 12 23:58:04.905976 kubelet[1775]: I0812 23:58:04.905948 1775 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 12 23:58:04.906840 kubelet[1775]: E0812 23:58:04.906816 1775 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Aug 12 23:58:05.050844 kubelet[1775]: I0812 23:58:05.050741 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:58:05.051024 kubelet[1775]: I0812 23:58:05.051003 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Aug 12 23:58:05.051120 kubelet[1775]: I0812 23:58:05.051108 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:58:05.051217 kubelet[1775]: I0812 23:58:05.051203 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:58:05.051304 kubelet[1775]: I0812 23:58:05.051290 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b55c49f7232e59b2686525f1adcbb089-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b55c49f7232e59b2686525f1adcbb089\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:58:05.051379 kubelet[1775]: I0812 23:58:05.051368 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:58:05.051453 kubelet[1775]: I0812 23:58:05.051443 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:58:05.051541 kubelet[1775]: I0812 23:58:05.051528 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b55c49f7232e59b2686525f1adcbb089-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b55c49f7232e59b2686525f1adcbb089\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:58:05.051643 kubelet[1775]: I0812 23:58:05.051611 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b55c49f7232e59b2686525f1adcbb089-k8s-certs\") pod \"kube-apiserver-localhost\" (UID:
\"b55c49f7232e59b2686525f1adcbb089\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:05.108213 kubelet[1775]: I0812 23:58:05.108189 1775 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:05.108777 kubelet[1775]: E0812 23:58:05.108747 1775 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Aug 12 23:58:05.249525 kubelet[1775]: E0812 23:58:05.249466 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Aug 12 23:58:05.279189 kubelet[1775]: E0812 23:58:05.279145 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:05.279298 kubelet[1775]: E0812 23:58:05.279142 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:05.279622 kubelet[1775]: E0812 23:58:05.279599 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:05.279934 env[1323]: time="2025-08-12T23:58:05.279894332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b55c49f7232e59b2686525f1adcbb089,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:05.280211 env[1323]: time="2025-08-12T23:58:05.279942053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:05.280400 
env[1323]: time="2025-08-12T23:58:05.280365254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:05.510602 kubelet[1775]: I0812 23:58:05.510539 1775 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:05.510949 kubelet[1775]: E0812 23:58:05.510919 1775 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Aug 12 23:58:05.512472 kubelet[1775]: W0812 23:58:05.512408 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 12 23:58:05.512524 kubelet[1775]: E0812 23:58:05.512482 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:05.535109 kubelet[1775]: W0812 23:58:05.535037 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 12 23:58:05.535416 kubelet[1775]: E0812 23:58:05.535113 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:05.631706 
kubelet[1775]: W0812 23:58:05.631595 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 12 23:58:05.631815 kubelet[1775]: E0812 23:58:05.631716 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:05.837827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797734274.mount: Deactivated successfully. Aug 12 23:58:05.845538 env[1323]: time="2025-08-12T23:58:05.844428538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.847347 env[1323]: time="2025-08-12T23:58:05.846099044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.850887 env[1323]: time="2025-08-12T23:58:05.850835408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.852800 env[1323]: time="2025-08-12T23:58:05.852149650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.854371 env[1323]: time="2025-08-12T23:58:05.854322705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.855470 env[1323]: time="2025-08-12T23:58:05.855431372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.857596 env[1323]: time="2025-08-12T23:58:05.857545537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.860380 env[1323]: time="2025-08-12T23:58:05.860270343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.862843 env[1323]: time="2025-08-12T23:58:05.862783528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.864344 env[1323]: time="2025-08-12T23:58:05.864305548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.865153 env[1323]: time="2025-08-12T23:58:05.865124727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.867152 env[1323]: time="2025-08-12T23:58:05.867016342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:05.911981 env[1323]: 
time="2025-08-12T23:58:05.911327933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:05.911981 env[1323]: time="2025-08-12T23:58:05.911380778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:05.911981 env[1323]: time="2025-08-12T23:58:05.911392748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:05.912999 env[1323]: time="2025-08-12T23:58:05.912662672Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fed5cdc3747379a8875fee2f41b488d138225b0c25f46f4651804d056dd63265 pid=1821 runtime=io.containerd.runc.v2 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.915310453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.915348685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.915358934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.915571716Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b94a429553f716b0a18a754e42b12544c1287d8d000149802e2a64ce254fee24 pid=1839 runtime=io.containerd.runc.v2 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.916300018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.916347778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.916359508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:05.917214 env[1323]: time="2025-08-12T23:58:05.916526331Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29bee0c50ec84e88088f0d43307e47b41b6c4fdb13e226a47c6eba752de6f76b pid=1840 runtime=io.containerd.runc.v2 Aug 12 23:58:06.019052 env[1323]: time="2025-08-12T23:58:06.019008864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b94a429553f716b0a18a754e42b12544c1287d8d000149802e2a64ce254fee24\"" Aug 12 23:58:06.020337 kubelet[1775]: E0812 23:58:06.020297 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:06.024371 env[1323]: time="2025-08-12T23:58:06.024317831Z" level=info msg="CreateContainer within sandbox \"b94a429553f716b0a18a754e42b12544c1287d8d000149802e2a64ce254fee24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:58:06.028985 env[1323]: time="2025-08-12T23:58:06.028937482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b55c49f7232e59b2686525f1adcbb089,Namespace:kube-system,Attempt:0,} returns sandbox id \"29bee0c50ec84e88088f0d43307e47b41b6c4fdb13e226a47c6eba752de6f76b\"" Aug 12 23:58:06.030229 kubelet[1775]: E0812 23:58:06.030193 1775 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:06.033327 env[1323]: time="2025-08-12T23:58:06.033275643Z" level=info msg="CreateContainer within sandbox \"29bee0c50ec84e88088f0d43307e47b41b6c4fdb13e226a47c6eba752de6f76b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:58:06.041647 env[1323]: time="2025-08-12T23:58:06.041578086Z" level=info msg="CreateContainer within sandbox \"b94a429553f716b0a18a754e42b12544c1287d8d000149802e2a64ce254fee24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4690e97b2ea2eb704fc8882282b1fc0b6d3b92563f11b3eb69bc845e68c1bb9f\"" Aug 12 23:58:06.042035 env[1323]: time="2025-08-12T23:58:06.042006166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"fed5cdc3747379a8875fee2f41b488d138225b0c25f46f4651804d056dd63265\"" Aug 12 23:58:06.042783 env[1323]: time="2025-08-12T23:58:06.042742596Z" level=info msg="StartContainer for \"4690e97b2ea2eb704fc8882282b1fc0b6d3b92563f11b3eb69bc845e68c1bb9f\"" Aug 12 23:58:06.043185 kubelet[1775]: E0812 23:58:06.043162 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:06.046165 env[1323]: time="2025-08-12T23:58:06.046122601Z" level=info msg="CreateContainer within sandbox \"fed5cdc3747379a8875fee2f41b488d138225b0c25f46f4651804d056dd63265\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:58:06.050087 kubelet[1775]: E0812 23:58:06.050047 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: 
connection refused" interval="1.6s" Aug 12 23:58:06.056938 env[1323]: time="2025-08-12T23:58:06.056851937Z" level=info msg="CreateContainer within sandbox \"29bee0c50ec84e88088f0d43307e47b41b6c4fdb13e226a47c6eba752de6f76b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f7747f99767dce6b8f6588f822f0fb4795dd64254e1c67d60b2dba55a4eea44d\"" Aug 12 23:58:06.057492 env[1323]: time="2025-08-12T23:58:06.057450184Z" level=info msg="StartContainer for \"f7747f99767dce6b8f6588f822f0fb4795dd64254e1c67d60b2dba55a4eea44d\"" Aug 12 23:58:06.067886 kubelet[1775]: W0812 23:58:06.067810 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 12 23:58:06.067886 kubelet[1775]: E0812 23:58:06.067886 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:06.068091 env[1323]: time="2025-08-12T23:58:06.068036853Z" level=info msg="CreateContainer within sandbox \"fed5cdc3747379a8875fee2f41b488d138225b0c25f46f4651804d056dd63265\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e2a37f3aa9c68cb660bde0d56e2aca367df89990308c69e227df6bfa1aab69c\"" Aug 12 23:58:06.071041 env[1323]: time="2025-08-12T23:58:06.070998626Z" level=info msg="StartContainer for \"1e2a37f3aa9c68cb660bde0d56e2aca367df89990308c69e227df6bfa1aab69c\"" Aug 12 23:58:06.185055 env[1323]: time="2025-08-12T23:58:06.184166414Z" level=info msg="StartContainer for \"f7747f99767dce6b8f6588f822f0fb4795dd64254e1c67d60b2dba55a4eea44d\" returns successfully" Aug 12 23:58:06.192637 
env[1323]: time="2025-08-12T23:58:06.192569772Z" level=info msg="StartContainer for \"4690e97b2ea2eb704fc8882282b1fc0b6d3b92563f11b3eb69bc845e68c1bb9f\" returns successfully" Aug 12 23:58:06.227854 env[1323]: time="2025-08-12T23:58:06.227798011Z" level=info msg="StartContainer for \"1e2a37f3aa9c68cb660bde0d56e2aca367df89990308c69e227df6bfa1aab69c\" returns successfully" Aug 12 23:58:06.313052 kubelet[1775]: I0812 23:58:06.313022 1775 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:06.313573 kubelet[1775]: E0812 23:58:06.313547 1775 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Aug 12 23:58:06.618481 kubelet[1775]: E0812 23:58:06.618439 1775 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:06.679737 kubelet[1775]: E0812 23:58:06.679709 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:06.682242 kubelet[1775]: E0812 23:58:06.682216 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:06.684504 kubelet[1775]: E0812 23:58:06.684476 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:07.686619 kubelet[1775]: E0812 23:58:07.686589 1775 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:07.687347 kubelet[1775]: E0812 23:58:07.687325 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:07.687850 kubelet[1775]: E0812 23:58:07.687828 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:07.915935 kubelet[1775]: I0812 23:58:07.915904 1775 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:08.053098 kubelet[1775]: E0812 23:58:08.053011 1775 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 12 23:58:08.101179 kubelet[1775]: I0812 23:58:08.100537 1775 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:58:08.155068 kubelet[1775]: E0812 23:58:08.154959 1775 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185b2a630922125c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:58:04.634296924 +0000 UTC m=+1.907639755,LastTimestamp:2025-08-12 23:58:04.634296924 +0000 UTC m=+1.907639755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:58:08.627446 kubelet[1775]: I0812 23:58:08.627411 1775 apiserver.go:52] "Watching 
apiserver" Aug 12 23:58:08.648429 kubelet[1775]: I0812 23:58:08.648390 1775 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:58:10.204030 systemd[1]: Reloading. Aug 12 23:58:10.267987 /usr/lib/systemd/system-generators/torcx-generator[2074]: time="2025-08-12T23:58:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 12 23:58:10.268022 /usr/lib/systemd/system-generators/torcx-generator[2074]: time="2025-08-12T23:58:10Z" level=info msg="torcx already run" Aug 12 23:58:10.367417 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 12 23:58:10.367436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 12 23:58:10.388419 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:58:10.473638 systemd[1]: Stopping kubelet.service... Aug 12 23:58:10.500988 kernel: kauditd_printk_skb: 47 callbacks suppressed Aug 12 23:58:10.501067 kernel: audit: type=1131 audit(1755043090.496:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:10.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:58:10.498150 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:58:10.498449 systemd[1]: Stopped kubelet.service. Aug 12 23:58:10.500273 systemd[1]: Starting kubelet.service... Aug 12 23:58:10.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:10.597519 systemd[1]: Started kubelet.service. Aug 12 23:58:10.600707 kernel: audit: type=1130 audit(1755043090.596:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:10.637333 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:58:10.637333 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:58:10.637333 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:58:10.638638 kubelet[2127]: I0812 23:58:10.637367 2127 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:58:10.648006 kubelet[2127]: I0812 23:58:10.646940 2127 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:58:10.648006 kubelet[2127]: I0812 23:58:10.646970 2127 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:58:10.648006 kubelet[2127]: I0812 23:58:10.647214 2127 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:58:10.648701 kubelet[2127]: I0812 23:58:10.648681 2127 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 12 23:58:10.653403 kubelet[2127]: I0812 23:58:10.651117 2127 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:58:10.662788 kubelet[2127]: E0812 23:58:10.662755 2127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:58:10.662788 kubelet[2127]: I0812 23:58:10.662787 2127 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:58:10.665240 kubelet[2127]: I0812 23:58:10.665203 2127 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:58:10.665592 kubelet[2127]: I0812 23:58:10.665568 2127 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:58:10.665796 kubelet[2127]: I0812 23:58:10.665753 2127 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:58:10.665994 kubelet[2127]: I0812 23:58:10.665789 2127 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Aug 12 23:58:10.665994 kubelet[2127]: I0812 23:58:10.665994 2127 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:58:10.666123 kubelet[2127]: I0812 23:58:10.666003 2127 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:58:10.666123 kubelet[2127]: I0812 23:58:10.666037 2127 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:58:10.666170 kubelet[2127]: I0812 23:58:10.666129 2127 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:58:10.666170 kubelet[2127]: I0812 23:58:10.666145 2127 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:58:10.666170 kubelet[2127]: I0812 23:58:10.666163 2127 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:58:10.666249 kubelet[2127]: I0812 23:58:10.666175 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:58:10.669906 kubelet[2127]: I0812 23:58:10.669130 2127 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 12 23:58:10.670051 kubelet[2127]: I0812 23:58:10.670024 2127 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:58:10.670460 kubelet[2127]: I0812 23:58:10.670434 2127 server.go:1274] "Started kubelet" Aug 12 23:58:10.671795 kubelet[2127]: I0812 23:58:10.671765 2127 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 12 23:58:10.671883 kubelet[2127]: I0812 23:58:10.671802 2127 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 12 23:58:10.671883 kubelet[2127]: I0812 
23:58:10.671824 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:58:10.677254 kernel: audit: type=1400 audit(1755043090.670:228): avc: denied { mac_admin } for pid=2127 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:10.677346 kernel: audit: type=1401 audit(1755043090.670:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 12 23:58:10.677365 kernel: audit: type=1300 audit(1755043090.670:228): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009bc690 a1=40009d00a8 a2=40009bc660 a3=25 items=0 ppid=1 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:10.670000 audit[2127]: AVC avc: denied { mac_admin } for pid=2127 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:10.670000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 12 23:58:10.670000 audit[2127]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009bc690 a1=40009d00a8 a2=40009bc660 a3=25 items=0 ppid=1 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:10.677499 kubelet[2127]: I0812 23:58:10.672957 2127 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:58:10.677499 kubelet[2127]: I0812 23:58:10.673787 2127 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:58:10.677499 kubelet[2127]: I0812 23:58:10.674677 2127 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:58:10.677499 kubelet[2127]: I0812 23:58:10.674862 
2127 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:58:10.677499 kubelet[2127]: I0812 23:58:10.675309 2127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:58:10.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 12 23:58:10.686753 kubelet[2127]: E0812 23:58:10.686720 2127 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:58:10.687105 kernel: audit: type=1327 audit(1755043090.670:228): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 12 23:58:10.687210 kernel: audit: type=1400 audit(1755043090.670:229): avc: denied { mac_admin } for pid=2127 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:10.670000 audit[2127]: AVC avc: denied { mac_admin } for pid=2127 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:10.687613 kubelet[2127]: I0812 23:58:10.687584 2127 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:58:10.688036 kubelet[2127]: I0812 23:58:10.688011 2127 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:58:10.688441 kubelet[2127]: I0812 
23:58:10.688424 2127 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:58:10.670000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 12 23:58:10.690039 kernel: audit: type=1401 audit(1755043090.670:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 12 23:58:10.690088 kernel: audit: type=1300 audit(1755043090.670:229): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b129a0 a1=40009d00c0 a2=40009bc720 a3=25 items=0 ppid=1 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:10.670000 audit[2127]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b129a0 a1=40009d00c0 a2=40009bc720 a3=25 items=0 ppid=1 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:10.690663 kubelet[2127]: I0812 23:58:10.690644 2127 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:58:10.690908 kubelet[2127]: I0812 23:58:10.690886 2127 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:58:10.691610 kubelet[2127]: E0812 23:58:10.691583 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:58:10.693013 kubelet[2127]: I0812 23:58:10.692995 2127 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:58:10.670000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 12 23:58:10.697264 kernel: audit: type=1327 audit(1755043090.670:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 12 23:58:10.703522 kubelet[2127]: I0812 23:58:10.703470 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:58:10.704618 kubelet[2127]: I0812 23:58:10.704585 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 12 23:58:10.704618 kubelet[2127]: I0812 23:58:10.704619 2127 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:58:10.704753 kubelet[2127]: I0812 23:58:10.704659 2127 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:58:10.704753 kubelet[2127]: E0812 23:58:10.704709 2127 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.741741 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.741758 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.741776 2127 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.741952 2127 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.741964 2127 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:58:10.743138 
kubelet[2127]: I0812 23:58:10.741985 2127 policy_none.go:49] "None policy: Start" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.742507 2127 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.742528 2127 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:58:10.743138 kubelet[2127]: I0812 23:58:10.742666 2127 state_mem.go:75] "Updated machine memory state" Aug 12 23:58:10.743785 kubelet[2127]: I0812 23:58:10.743757 2127 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:58:10.742000 audit[2127]: AVC avc: denied { mac_admin } for pid=2127 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:10.742000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 12 23:58:10.742000 audit[2127]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e2f860 a1=4000cf7668 a2=4000e2f830 a3=25 items=0 ppid=1 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:10.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 12 23:58:10.744002 kubelet[2127]: I0812 23:58:10.743820 2127 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 12 23:58:10.744002 kubelet[2127]: I0812 23:58:10.743951 2127 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:58:10.744002 kubelet[2127]: I0812 23:58:10.743962 2127 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:58:10.745240 kubelet[2127]: I0812 23:58:10.745209 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:58:10.847745 kubelet[2127]: I0812 23:58:10.847701 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:10.855308 kubelet[2127]: I0812 23:58:10.855267 2127 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 12 23:58:10.855448 kubelet[2127]: I0812 23:58:10.855373 2127 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:58:10.989722 kubelet[2127]: I0812 23:58:10.989682 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b55c49f7232e59b2686525f1adcbb089-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b55c49f7232e59b2686525f1adcbb089\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:10.989903 kubelet[2127]: I0812 23:58:10.989883 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b55c49f7232e59b2686525f1adcbb089-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b55c49f7232e59b2686525f1adcbb089\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:10.990062 kubelet[2127]: I0812 23:58:10.990045 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:10.990144 kubelet[2127]: I0812 23:58:10.990130 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:10.990216 kubelet[2127]: I0812 23:58:10.990204 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b55c49f7232e59b2686525f1adcbb089-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b55c49f7232e59b2686525f1adcbb089\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:10.990289 kubelet[2127]: I0812 23:58:10.990276 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:10.990395 kubelet[2127]: I0812 23:58:10.990363 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:10.990470 kubelet[2127]: I0812 23:58:10.990458 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:10.990542 kubelet[2127]: I0812 23:58:10.990530 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:58:11.115014 kubelet[2127]: E0812 23:58:11.114902 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:11.117032 kubelet[2127]: E0812 23:58:11.117005 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:11.117198 kubelet[2127]: E0812 23:58:11.117123 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:11.667700 kubelet[2127]: I0812 23:58:11.667649 2127 apiserver.go:52] "Watching apiserver" Aug 12 23:58:11.688826 kubelet[2127]: I0812 23:58:11.688793 2127 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:58:11.716523 kubelet[2127]: E0812 23:58:11.716495 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:11.716817 kubelet[2127]: E0812 23:58:11.716789 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:11.723114 kubelet[2127]: E0812 23:58:11.723083 2127 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:11.723398 kubelet[2127]: E0812 23:58:11.723382 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:11.758423 kubelet[2127]: I0812 23:58:11.758350 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.758332223 podStartE2EDuration="1.758332223s" podCreationTimestamp="2025-08-12 23:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:58:11.746144625 +0000 UTC m=+1.143649376" watchObservedRunningTime="2025-08-12 23:58:11.758332223 +0000 UTC m=+1.155836974" Aug 12 23:58:11.766483 kubelet[2127]: I0812 23:58:11.766429 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7664117350000001 podStartE2EDuration="1.766411735s" podCreationTimestamp="2025-08-12 23:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:58:11.758483969 +0000 UTC m=+1.155988720" watchObservedRunningTime="2025-08-12 23:58:11.766411735 +0000 UTC m=+1.163916486" Aug 12 23:58:12.718354 kubelet[2127]: E0812 23:58:12.717975 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:13.719756 kubelet[2127]: E0812 23:58:13.719362 2127 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:13.951295 kubelet[2127]: E0812 23:58:13.950899 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:15.062212 kubelet[2127]: I0812 23:58:15.062155 2127 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:58:15.062558 env[1323]: time="2025-08-12T23:58:15.062475977Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 12 23:58:15.062761 kubelet[2127]: I0812 23:58:15.062696 2127 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:58:15.889506 kubelet[2127]: I0812 23:58:15.889443 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.889422702 podStartE2EDuration="5.889422702s" podCreationTimestamp="2025-08-12 23:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:58:11.7668344 +0000 UTC m=+1.164339151" watchObservedRunningTime="2025-08-12 23:58:15.889422702 +0000 UTC m=+5.286927453" Aug 12 23:58:15.922903 kubelet[2127]: I0812 23:58:15.922864 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df309a2f-9258-4dc7-9a00-08e578333460-xtables-lock\") pod \"kube-proxy-gb5gd\" (UID: \"df309a2f-9258-4dc7-9a00-08e578333460\") " pod="kube-system/kube-proxy-gb5gd" Aug 12 23:58:15.922903 kubelet[2127]: I0812 23:58:15.922905 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/df309a2f-9258-4dc7-9a00-08e578333460-lib-modules\") pod \"kube-proxy-gb5gd\" (UID: \"df309a2f-9258-4dc7-9a00-08e578333460\") " pod="kube-system/kube-proxy-gb5gd" Aug 12 23:58:15.923096 kubelet[2127]: I0812 23:58:15.922927 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df309a2f-9258-4dc7-9a00-08e578333460-kube-proxy\") pod \"kube-proxy-gb5gd\" (UID: \"df309a2f-9258-4dc7-9a00-08e578333460\") " pod="kube-system/kube-proxy-gb5gd" Aug 12 23:58:15.923096 kubelet[2127]: I0812 23:58:15.922945 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjh7k\" (UniqueName: \"kubernetes.io/projected/df309a2f-9258-4dc7-9a00-08e578333460-kube-api-access-bjh7k\") pod \"kube-proxy-gb5gd\" (UID: \"df309a2f-9258-4dc7-9a00-08e578333460\") " pod="kube-system/kube-proxy-gb5gd" Aug 12 23:58:16.023345 kubelet[2127]: I0812 23:58:16.023290 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cdn5\" (UniqueName: \"kubernetes.io/projected/d1952f57-ed39-423f-87b8-acd414c26f3b-kube-api-access-7cdn5\") pod \"tigera-operator-5bf8dfcb4-95t4l\" (UID: \"d1952f57-ed39-423f-87b8-acd414c26f3b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-95t4l" Aug 12 23:58:16.023484 kubelet[2127]: I0812 23:58:16.023374 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1952f57-ed39-423f-87b8-acd414c26f3b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-95t4l\" (UID: \"d1952f57-ed39-423f-87b8-acd414c26f3b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-95t4l" Aug 12 23:58:16.032104 kubelet[2127]: I0812 23:58:16.032053 2127 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 12 23:58:16.193612 kubelet[2127]: E0812 23:58:16.193500 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:16.194098 env[1323]: time="2025-08-12T23:58:16.194048198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gb5gd,Uid:df309a2f-9258-4dc7-9a00-08e578333460,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:16.210023 env[1323]: time="2025-08-12T23:58:16.209948195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:16.210023 env[1323]: time="2025-08-12T23:58:16.209989449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:16.210023 env[1323]: time="2025-08-12T23:58:16.209999972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:16.210214 env[1323]: time="2025-08-12T23:58:16.210112970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa19fba0e4f610f1a83f1a7c0c30cfc11c6df83cceab80a556e708ac60d5b390 pid=2184 runtime=io.containerd.runc.v2 Aug 12 23:58:16.253316 env[1323]: time="2025-08-12T23:58:16.253274134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gb5gd,Uid:df309a2f-9258-4dc7-9a00-08e578333460,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa19fba0e4f610f1a83f1a7c0c30cfc11c6df83cceab80a556e708ac60d5b390\"" Aug 12 23:58:16.254210 kubelet[2127]: E0812 23:58:16.254183 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:16.257068 env[1323]: time="2025-08-12T23:58:16.256445466Z" level=info msg="CreateContainer within sandbox \"aa19fba0e4f610f1a83f1a7c0c30cfc11c6df83cceab80a556e708ac60d5b390\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:58:16.271524 env[1323]: time="2025-08-12T23:58:16.271459089Z" level=info msg="CreateContainer within sandbox \"aa19fba0e4f610f1a83f1a7c0c30cfc11c6df83cceab80a556e708ac60d5b390\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1c5493a9571a420998a996969d4162cc8f0b2ea651f423f08580f0a918505f1\"" Aug 12 23:58:16.272660 env[1323]: time="2025-08-12T23:58:16.272267077Z" level=info msg="StartContainer for \"c1c5493a9571a420998a996969d4162cc8f0b2ea651f423f08580f0a918505f1\"" Aug 12 23:58:16.307671 env[1323]: time="2025-08-12T23:58:16.307603645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-95t4l,Uid:d1952f57-ed39-423f-87b8-acd414c26f3b,Namespace:tigera-operator,Attempt:0,}" Aug 12 23:58:16.326723 env[1323]: time="2025-08-12T23:58:16.326610953Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:16.326886 env[1323]: time="2025-08-12T23:58:16.326689819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:16.326886 env[1323]: time="2025-08-12T23:58:16.326700303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:16.326886 env[1323]: time="2025-08-12T23:58:16.326822503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5986e4aa48f6bb61ae286ad9bae6a195a578098cd585267564bf38bdfc0f259 pid=2248 runtime=io.containerd.runc.v2 Aug 12 23:58:16.337768 env[1323]: time="2025-08-12T23:58:16.337716719Z" level=info msg="StartContainer for \"c1c5493a9571a420998a996969d4162cc8f0b2ea651f423f08580f0a918505f1\" returns successfully" Aug 12 23:58:16.404224 env[1323]: time="2025-08-12T23:58:16.404183818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-95t4l,Uid:d1952f57-ed39-423f-87b8-acd414c26f3b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f5986e4aa48f6bb61ae286ad9bae6a195a578098cd585267564bf38bdfc0f259\"" Aug 12 23:58:16.406292 env[1323]: time="2025-08-12T23:58:16.406262028Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 12 23:58:16.546737 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 12 23:58:16.546841 kernel: audit: type=1325 audit(1755043096.544:231): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.544000 audit[2325]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.544000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 
a1=fffffb33f1c0 a2=0 a3=1 items=0 ppid=2235 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.550755 kernel: audit: type=1300 audit(1755043096.544:231): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb33f1c0 a2=0 a3=1 items=0 ppid=2235 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.550855 kernel: audit: type=1327 audit(1755043096.544:231): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 12 23:58:16.544000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 12 23:58:16.544000 audit[2326]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.554361 kernel: audit: type=1325 audit(1755043096.544:232): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.554447 kernel: audit: type=1300 audit(1755043096.544:232): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9f481a0 a2=0 a3=1 items=0 ppid=2235 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.544000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9f481a0 a2=0 a3=1 items=0 ppid=2235 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.559932 kernel: audit: type=1327 audit(1755043096.544:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 12 23:58:16.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 12 23:58:16.560182 kernel: audit: type=1325 audit(1755043096.547:233): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.547000 audit[2327]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.547000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb41a5d0 a2=0 a3=1 items=0 ppid=2235 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.566052 kernel: audit: type=1300 audit(1755043096.547:233): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb41a5d0 a2=0 a3=1 items=0 ppid=2235 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.566125 kernel: audit: type=1327 audit(1755043096.547:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 12 23:58:16.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 12 23:58:16.548000 audit[2328]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Aug 12 23:58:16.574285 kernel: audit: type=1325 audit(1755043096.548:234): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.548000 audit[2328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc2a5440 a2=0 a3=1 items=0 ppid=2235 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 12 23:58:16.549000 audit[2329]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.549000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9b938c0 a2=0 a3=1 items=0 ppid=2235 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 12 23:58:16.551000 audit[2330]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.551000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda3c3920 a2=0 a3=1 items=0 ppid=2235 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.551000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 12 23:58:16.647000 audit[2331]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.647000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe8b079c0 a2=0 a3=1 items=0 ppid=2235 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 12 23:58:16.654000 audit[2333]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.654000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc0c28f00 a2=0 a3=1 items=0 ppid=2235 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 12 23:58:16.661000 audit[2336]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.661000 audit[2336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc0f66330 a2=0 a3=1 items=0 ppid=2235 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 12 23:58:16.662000 audit[2337]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.662000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb3a1bc0 a2=0 a3=1 items=0 ppid=2235 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 12 23:58:16.664000 audit[2339]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.664000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcee59900 a2=0 a3=1 items=0 ppid=2235 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 12 23:58:16.665000 audit[2340]: NETFILTER_CFG table=filter:49 family=2 entries=1 
op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.665000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfb6e010 a2=0 a3=1 items=0 ppid=2235 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 12 23:58:16.668000 audit[2342]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.668000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffed858bc0 a2=0 a3=1 items=0 ppid=2235 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 12 23:58:16.671000 audit[2345]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.671000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff00c7000 a2=0 a3=1 items=0 ppid=2235 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.671000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 12 23:58:16.672000 audit[2346]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.672000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9778c70 a2=0 a3=1 items=0 ppid=2235 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 12 23:58:16.676000 audit[2348]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.676000 audit[2348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7589ed0 a2=0 a3=1 items=0 ppid=2235 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 12 23:58:16.677000 audit[2349]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.677000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd167c490 a2=0 a3=1 
items=0 ppid=2235 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 12 23:58:16.680000 audit[2351]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.680000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb1759a0 a2=0 a3=1 items=0 ppid=2235 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 12 23:58:16.685000 audit[2354]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.685000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed24d910 a2=0 a3=1 items=0 ppid=2235 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.685000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 12 23:58:16.689000 audit[2357]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.689000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff91841d0 a2=0 a3=1 items=0 ppid=2235 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 12 23:58:16.690000 audit[2358]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.690000 audit[2358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffccd30dc0 a2=0 a3=1 items=0 ppid=2235 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 12 23:58:16.710000 audit[2360]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.710000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=ffffee4604a0 a2=0 a3=1 items=0 ppid=2235 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 12 23:58:16.713000 audit[2363]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.713000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc73bac60 a2=0 a3=1 items=0 ppid=2235 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 12 23:58:16.715000 audit[2364]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.715000 audit[2364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc383d020 a2=0 a3=1 items=0 ppid=2235 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 12 23:58:16.724000 audit[2366]: NETFILTER_CFG 
table=nat:62 family=2 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 12 23:58:16.724000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd270aa30 a2=0 a3=1 items=0 ppid=2235 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 12 23:58:16.727415 kubelet[2127]: E0812 23:58:16.727178 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:16.757000 audit[2372]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:16.757000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffea2c0d40 a2=0 a3=1 items=0 ppid=2235 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:16.773000 audit[2372]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:16.773000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffea2c0d40 a2=0 a3=1 items=0 ppid=2235 pid=2372 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:16.775000 audit[2377]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.775000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffde5a7b50 a2=0 a3=1 items=0 ppid=2235 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 12 23:58:16.777000 audit[2379]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.777000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe4d466c0 a2=0 a3=1 items=0 ppid=2235 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 12 23:58:16.781000 audit[2382]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Aug 12 23:58:16.781000 audit[2382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffce530b80 a2=0 a3=1 items=0 ppid=2235 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 12 23:58:16.782000 audit[2383]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.782000 audit[2383]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4bd3560 a2=0 a3=1 items=0 ppid=2235 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 12 23:58:16.784000 audit[2385]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.784000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff3424a40 a2=0 a3=1 items=0 ppid=2235 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.784000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 12 23:58:16.785000 audit[2386]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.785000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7068610 a2=0 a3=1 items=0 ppid=2235 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 12 23:58:16.788000 audit[2388]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.788000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffca892330 a2=0 a3=1 items=0 ppid=2235 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 12 23:58:16.791000 audit[2391]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.791000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 
a0=3 a1=ffffd50e1d00 a2=0 a3=1 items=0 ppid=2235 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 12 23:58:16.793000 audit[2392]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.793000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda24b020 a2=0 a3=1 items=0 ppid=2235 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 12 23:58:16.795000 audit[2394]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.795000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc3fa22e0 a2=0 a3=1 items=0 ppid=2235 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.795000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 12 23:58:16.796000 audit[2395]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.796000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7180c70 a2=0 a3=1 items=0 ppid=2235 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 12 23:58:16.802000 audit[2397]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.802000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc36cb750 a2=0 a3=1 items=0 ppid=2235 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 12 23:58:16.805000 audit[2400]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.805000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 
a1=ffffd80514a0 a2=0 a3=1 items=0 ppid=2235 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 12 23:58:16.809000 audit[2403]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.809000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff1d23530 a2=0 a3=1 items=0 ppid=2235 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 12 23:58:16.810000 audit[2404]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.810000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd38c18a0 a2=0 a3=1 items=0 ppid=2235 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.810000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 12 23:58:16.812000 audit[2406]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.812000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd14af620 a2=0 a3=1 items=0 ppid=2235 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 12 23:58:16.815000 audit[2409]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.815000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff47bbc60 a2=0 a3=1 items=0 ppid=2235 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 12 23:58:16.816000 audit[2410]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.816000 audit[2410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedebe550 a2=0 a3=1 items=0 ppid=2235 pid=2410 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 12 23:58:16.818000 audit[2412]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.818000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffad500e0 a2=0 a3=1 items=0 ppid=2235 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 12 23:58:16.819000 audit[2413]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.819000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef7a2440 a2=0 a3=1 items=0 ppid=2235 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 12 23:58:16.822000 audit[2415]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.822000 audit[2415]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe2c3b1f0 a2=0 a3=1 items=0 ppid=2235 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 12 23:58:16.825000 audit[2418]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 12 23:58:16.825000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffa7f0ab0 a2=0 a3=1 items=0 ppid=2235 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 12 23:58:16.828000 audit[2420]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 12 23:58:16.828000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe51ddf80 a2=0 a3=1 items=0 ppid=2235 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.828000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:16.828000 audit[2420]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables-resto" Aug 12 23:58:16.828000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe51ddf80 a2=0 a3=1 items=0 ppid=2235 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:16.828000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:17.611664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730677575.mount: Deactivated successfully. Aug 12 23:58:18.006866 kubelet[2127]: E0812 23:58:18.006833 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:18.025376 kubelet[2127]: I0812 23:58:18.025324 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gb5gd" podStartSLOduration=3.025304998 podStartE2EDuration="3.025304998s" podCreationTimestamp="2025-08-12 23:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:58:16.74425084 +0000 UTC m=+6.141755591" watchObservedRunningTime="2025-08-12 23:58:18.025304998 +0000 UTC m=+7.422809749" Aug 12 23:58:18.733136 kubelet[2127]: E0812 23:58:18.732335 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:19.050703 env[1323]: time="2025-08-12T23:58:19.050651718Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:19.053053 env[1323]: time="2025-08-12T23:58:19.053013624Z" 
level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:19.054714 env[1323]: time="2025-08-12T23:58:19.054681254Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:19.056121 env[1323]: time="2025-08-12T23:58:19.056088891Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:19.057393 env[1323]: time="2025-08-12T23:58:19.057346206Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 12 23:58:19.060911 env[1323]: time="2025-08-12T23:58:19.060868880Z" level=info msg="CreateContainer within sandbox \"f5986e4aa48f6bb61ae286ad9bae6a195a578098cd585267564bf38bdfc0f259\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 12 23:58:19.075271 env[1323]: time="2025-08-12T23:58:19.075213566Z" level=info msg="CreateContainer within sandbox \"f5986e4aa48f6bb61ae286ad9bae6a195a578098cd585267564bf38bdfc0f259\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"48fba25e4c40508dd917844999b2bc5a9bdcfa7d78c8a5910c807e4fc7cd5b4f\"" Aug 12 23:58:19.077488 env[1323]: time="2025-08-12T23:58:19.076664495Z" level=info msg="StartContainer for \"48fba25e4c40508dd917844999b2bc5a9bdcfa7d78c8a5910c807e4fc7cd5b4f\"" Aug 12 23:58:19.153431 env[1323]: time="2025-08-12T23:58:19.153390338Z" level=info msg="StartContainer for \"48fba25e4c40508dd917844999b2bc5a9bdcfa7d78c8a5910c807e4fc7cd5b4f\" returns successfully" Aug 12 23:58:21.765175 kubelet[2127]: E0812 
23:58:21.765137 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:21.776979 kubelet[2127]: I0812 23:58:21.776900 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-95t4l" podStartSLOduration=4.123574267 podStartE2EDuration="6.776881749s" podCreationTimestamp="2025-08-12 23:58:15 +0000 UTC" firstStartedPulling="2025-08-12 23:58:16.405534626 +0000 UTC m=+5.803039377" lastFinishedPulling="2025-08-12 23:58:19.058842108 +0000 UTC m=+8.456346859" observedRunningTime="2025-08-12 23:58:19.744400046 +0000 UTC m=+9.141904797" watchObservedRunningTime="2025-08-12 23:58:21.776881749 +0000 UTC m=+11.174386500" Aug 12 23:58:23.958273 kubelet[2127]: E0812 23:58:23.958230 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:23.984076 update_engine[1312]: I0812 23:58:23.984002 1312 update_attempter.cc:509] Updating boot flags... Aug 12 23:58:24.650611 sudo[1487]: pam_unix(sudo:session): session closed for user root Aug 12 23:58:24.656162 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 12 23:58:24.656251 kernel: audit: type=1106 audit(1755043104.650:282): pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 12 23:58:24.656276 kernel: audit: type=1104 audit(1755043104.650:283): pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 12 23:58:24.650000 audit[1487]: USER_END pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 12 23:58:24.650000 audit[1487]: CRED_DISP pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 12 23:58:24.674515 sshd[1481]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:24.674000 audit[1481]: USER_END pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:24.677340 systemd-logind[1309]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:58:24.677563 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:59016.service: Deactivated successfully. Aug 12 23:58:24.678396 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:58:24.678854 systemd-logind[1309]: Removed session 7. 
Aug 12 23:58:24.681958 kernel: audit: type=1106 audit(1755043104.674:284): pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:24.682042 kernel: audit: type=1104 audit(1755043104.674:285): pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:24.674000 audit[1481]: CRED_DISP pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:24.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.49:22-10.0.0.1:59016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:24.684538 kernel: audit: type=1131 audit(1755043104.674:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.49:22-10.0.0.1:59016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:58:25.468478 kernel: audit: type=1325 audit(1755043105.460:287): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:25.468610 kernel: audit: type=1300 audit(1755043105.460:287): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa253660 a2=0 a3=1 items=0 ppid=2235 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:25.468652 kernel: audit: type=1327 audit(1755043105.460:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:25.460000 audit[2528]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:25.460000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa253660 a2=0 a3=1 items=0 ppid=2235 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:25.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:25.479344 kernel: audit: type=1325 audit(1755043105.473:288): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:25.479461 kernel: audit: type=1300 audit(1755043105.473:288): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa253660 a2=0 a3=1 items=0 ppid=2235 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:25.473000 audit[2528]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:25.473000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa253660 a2=0 a3=1 items=0 ppid=2235 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:25.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:25.498000 audit[2531]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:25.498000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe1573980 a2=0 a3=1 items=0 ppid=2235 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:25.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:25.502000 audit[2531]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:25.502000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1573980 a2=0 a3=1 items=0 ppid=2235 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:25.502000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:28.649000 audit[2533]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:28.649000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffe234d30 a2=0 a3=1 items=0 ppid=2235 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:28.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:28.656000 audit[2533]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:28.656000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe234d30 a2=0 a3=1 items=0 ppid=2235 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:28.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:28.691000 audit[2535]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:28.691000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffcfc3bb0 a2=0 a3=1 items=0 ppid=2235 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Aug 12 23:58:28.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:28.697000 audit[2535]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:28.697000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffcfc3bb0 a2=0 a3=1 items=0 ppid=2235 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:28.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:28.816585 kubelet[2127]: I0812 23:58:28.816540 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsk8\" (UniqueName: \"kubernetes.io/projected/3ad4a1db-ff16-40b1-88d0-de978599df9b-kube-api-access-lrsk8\") pod \"calico-typha-fb8f8b8b6-kzmdg\" (UID: \"3ad4a1db-ff16-40b1-88d0-de978599df9b\") " pod="calico-system/calico-typha-fb8f8b8b6-kzmdg" Aug 12 23:58:28.817044 kubelet[2127]: I0812 23:58:28.816598 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ad4a1db-ff16-40b1-88d0-de978599df9b-typha-certs\") pod \"calico-typha-fb8f8b8b6-kzmdg\" (UID: \"3ad4a1db-ff16-40b1-88d0-de978599df9b\") " pod="calico-system/calico-typha-fb8f8b8b6-kzmdg" Aug 12 23:58:28.817044 kubelet[2127]: I0812 23:58:28.816621 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad4a1db-ff16-40b1-88d0-de978599df9b-tigera-ca-bundle\") pod \"calico-typha-fb8f8b8b6-kzmdg\" (UID: 
\"3ad4a1db-ff16-40b1-88d0-de978599df9b\") " pod="calico-system/calico-typha-fb8f8b8b6-kzmdg" Aug 12 23:58:28.967143 kubelet[2127]: E0812 23:58:28.967019 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:28.967890 env[1323]: time="2025-08-12T23:58:28.967839667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb8f8b8b6-kzmdg,Uid:3ad4a1db-ff16-40b1-88d0-de978599df9b,Namespace:calico-system,Attempt:0,}" Aug 12 23:58:28.988830 env[1323]: time="2025-08-12T23:58:28.988381311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:28.988830 env[1323]: time="2025-08-12T23:58:28.988427400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:28.988830 env[1323]: time="2025-08-12T23:58:28.988438041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:28.988830 env[1323]: time="2025-08-12T23:58:28.988580947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52e12e4457bc4d69caeff5c2213fe0c2630e77eeafb1468b74ff1a6080549906 pid=2546 runtime=io.containerd.runc.v2 Aug 12 23:58:29.018093 kubelet[2127]: I0812 23:58:29.017979 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-cni-log-dir\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018093 kubelet[2127]: I0812 23:58:29.018029 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-lib-modules\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018093 kubelet[2127]: I0812 23:58:29.018047 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-var-lib-calico\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018093 kubelet[2127]: I0812 23:58:29.018064 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kd4\" (UniqueName: \"kubernetes.io/projected/45238ab2-8204-4035-8dad-19fd2e981b2e-kube-api-access-s7kd4\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018093 kubelet[2127]: I0812 23:58:29.018083 2127 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-cni-net-dir\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018342 kubelet[2127]: I0812 23:58:29.018128 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/45238ab2-8204-4035-8dad-19fd2e981b2e-node-certs\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018342 kubelet[2127]: I0812 23:58:29.018169 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45238ab2-8204-4035-8dad-19fd2e981b2e-tigera-ca-bundle\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018342 kubelet[2127]: I0812 23:58:29.018222 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-var-run-calico\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018342 kubelet[2127]: I0812 23:58:29.018276 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-xtables-lock\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018342 kubelet[2127]: I0812 23:58:29.018295 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-cni-bin-dir\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018522 kubelet[2127]: I0812 23:58:29.018311 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-flexvol-driver-host\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.018522 kubelet[2127]: I0812 23:58:29.018358 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/45238ab2-8204-4035-8dad-19fd2e981b2e-policysync\") pod \"calico-node-mlxzr\" (UID: \"45238ab2-8204-4035-8dad-19fd2e981b2e\") " pod="calico-system/calico-node-mlxzr" Aug 12 23:58:29.102131 env[1323]: time="2025-08-12T23:58:29.102089117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb8f8b8b6-kzmdg,Uid:3ad4a1db-ff16-40b1-88d0-de978599df9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"52e12e4457bc4d69caeff5c2213fe0c2630e77eeafb1468b74ff1a6080549906\"" Aug 12 23:58:29.102827 kubelet[2127]: E0812 23:58:29.102804 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:29.103844 env[1323]: time="2025-08-12T23:58:29.103815412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 12 23:58:29.121096 kubelet[2127]: E0812 23:58:29.119588 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.121096 kubelet[2127]: W0812 23:58:29.119637 2127 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.121096 kubelet[2127]: E0812 23:58:29.119660 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.121096 kubelet[2127]: E0812 23:58:29.119851 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.121096 kubelet[2127]: W0812 23:58:29.119861 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.121096 kubelet[2127]: E0812 23:58:29.119870 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.121096 kubelet[2127]: E0812 23:58:29.120028 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.121096 kubelet[2127]: W0812 23:58:29.120035 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.121096 kubelet[2127]: E0812 23:58:29.120044 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.121096 kubelet[2127]: E0812 23:58:29.120199 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.123002 kubelet[2127]: W0812 23:58:29.120207 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.123002 kubelet[2127]: E0812 23:58:29.120215 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.123002 kubelet[2127]: E0812 23:58:29.120362 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.123002 kubelet[2127]: W0812 23:58:29.120375 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.123002 kubelet[2127]: E0812 23:58:29.120382 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.123002 kubelet[2127]: E0812 23:58:29.121376 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.123002 kubelet[2127]: W0812 23:58:29.121389 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.123002 kubelet[2127]: E0812 23:58:29.121403 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.123002 kubelet[2127]: E0812 23:58:29.122879 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.123002 kubelet[2127]: W0812 23:58:29.122895 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.123225 kubelet[2127]: E0812 23:58:29.122908 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.133836 kubelet[2127]: E0812 23:58:29.133806 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.133836 kubelet[2127]: W0812 23:58:29.133831 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.133836 kubelet[2127]: E0812 23:58:29.133851 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.266716 kubelet[2127]: E0812 23:58:29.266657 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzhss" podUID="3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc" Aug 12 23:58:29.277025 env[1323]: time="2025-08-12T23:58:29.276975975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mlxzr,Uid:45238ab2-8204-4035-8dad-19fd2e981b2e,Namespace:calico-system,Attempt:0,}" Aug 12 23:58:29.293719 env[1323]: time="2025-08-12T23:58:29.293329855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:29.293719 env[1323]: time="2025-08-12T23:58:29.293379223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:29.293719 env[1323]: time="2025-08-12T23:58:29.293389345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:29.295154 env[1323]: time="2025-08-12T23:58:29.295037587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a pid=2608 runtime=io.containerd.runc.v2 Aug 12 23:58:29.320355 kubelet[2127]: E0812 23:58:29.320317 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.320355 kubelet[2127]: W0812 23:58:29.320346 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.320355 kubelet[2127]: E0812 23:58:29.320367 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.320573 kubelet[2127]: E0812 23:58:29.320565 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.320604 kubelet[2127]: W0812 23:58:29.320575 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.320604 kubelet[2127]: E0812 23:58:29.320584 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.324476 kubelet[2127]: E0812 23:58:29.324015 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.324476 kubelet[2127]: W0812 23:58:29.324045 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.324476 kubelet[2127]: E0812 23:58:29.324075 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.328385 kubelet[2127]: E0812 23:58:29.328348 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.328549 kubelet[2127]: W0812 23:58:29.328520 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.328765 kubelet[2127]: E0812 23:58:29.328748 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.329565 kubelet[2127]: E0812 23:58:29.329541 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.329714 kubelet[2127]: W0812 23:58:29.329696 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.329812 kubelet[2127]: E0812 23:58:29.329797 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.330143 kubelet[2127]: E0812 23:58:29.330129 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.330244 kubelet[2127]: W0812 23:58:29.330231 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.330318 kubelet[2127]: E0812 23:58:29.330306 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.330703 kubelet[2127]: E0812 23:58:29.330681 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.330804 kubelet[2127]: W0812 23:58:29.330790 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.330874 kubelet[2127]: E0812 23:58:29.330862 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.331136 kubelet[2127]: E0812 23:58:29.331120 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.331219 kubelet[2127]: W0812 23:58:29.331206 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.331284 kubelet[2127]: E0812 23:58:29.331264 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.331556 kubelet[2127]: E0812 23:58:29.331540 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.331648 kubelet[2127]: W0812 23:58:29.331634 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.331714 kubelet[2127]: E0812 23:58:29.331702 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.331961 kubelet[2127]: E0812 23:58:29.331948 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.332055 kubelet[2127]: W0812 23:58:29.332042 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.332136 kubelet[2127]: E0812 23:58:29.332118 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.332368 kubelet[2127]: E0812 23:58:29.332355 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.332462 kubelet[2127]: W0812 23:58:29.332450 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.332536 kubelet[2127]: E0812 23:58:29.332509 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.332922 kubelet[2127]: E0812 23:58:29.332906 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.333017 kubelet[2127]: W0812 23:58:29.333003 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.333106 kubelet[2127]: E0812 23:58:29.333093 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.333492 kubelet[2127]: E0812 23:58:29.333474 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.333602 kubelet[2127]: W0812 23:58:29.333586 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.333679 kubelet[2127]: E0812 23:58:29.333667 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.333978 kubelet[2127]: E0812 23:58:29.333963 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.334072 kubelet[2127]: W0812 23:58:29.334058 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.334148 kubelet[2127]: E0812 23:58:29.334136 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.334479 kubelet[2127]: E0812 23:58:29.334465 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.334658 kubelet[2127]: W0812 23:58:29.334638 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.334739 kubelet[2127]: E0812 23:58:29.334727 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.335063 kubelet[2127]: E0812 23:58:29.335046 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.335167 kubelet[2127]: W0812 23:58:29.335153 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.335239 kubelet[2127]: E0812 23:58:29.335228 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.335521 kubelet[2127]: E0812 23:58:29.335506 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.335618 kubelet[2127]: W0812 23:58:29.335606 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.335725 kubelet[2127]: E0812 23:58:29.335711 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.335964 kubelet[2127]: E0812 23:58:29.335951 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.336037 kubelet[2127]: W0812 23:58:29.336023 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.336104 kubelet[2127]: E0812 23:58:29.336092 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.336340 kubelet[2127]: E0812 23:58:29.336327 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.336419 kubelet[2127]: W0812 23:58:29.336406 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.336476 kubelet[2127]: E0812 23:58:29.336464 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.336770 kubelet[2127]: E0812 23:58:29.336758 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.336845 kubelet[2127]: W0812 23:58:29.336832 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.336904 kubelet[2127]: E0812 23:58:29.336892 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.337204 kubelet[2127]: E0812 23:58:29.337191 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.337308 kubelet[2127]: W0812 23:58:29.337294 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.337369 kubelet[2127]: E0812 23:58:29.337358 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.337465 kubelet[2127]: I0812 23:58:29.337450 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc-kubelet-dir\") pod \"csi-node-driver-wzhss\" (UID: \"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc\") " pod="calico-system/csi-node-driver-wzhss" Aug 12 23:58:29.337718 kubelet[2127]: E0812 23:58:29.337700 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.337718 kubelet[2127]: W0812 23:58:29.337718 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.337809 kubelet[2127]: E0812 23:58:29.337736 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.337894 kubelet[2127]: E0812 23:58:29.337882 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.337894 kubelet[2127]: W0812 23:58:29.337893 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.337948 kubelet[2127]: E0812 23:58:29.337902 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.338062 kubelet[2127]: E0812 23:58:29.338051 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.338087 kubelet[2127]: W0812 23:58:29.338062 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.338087 kubelet[2127]: E0812 23:58:29.338070 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.338142 kubelet[2127]: I0812 23:58:29.338092 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc-varrun\") pod \"csi-node-driver-wzhss\" (UID: \"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc\") " pod="calico-system/csi-node-driver-wzhss" Aug 12 23:58:29.338257 kubelet[2127]: E0812 23:58:29.338246 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.338286 kubelet[2127]: W0812 23:58:29.338258 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.338286 kubelet[2127]: E0812 23:58:29.338267 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.338342 kubelet[2127]: I0812 23:58:29.338281 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc-registration-dir\") pod \"csi-node-driver-wzhss\" (UID: \"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc\") " pod="calico-system/csi-node-driver-wzhss" Aug 12 23:58:29.338447 kubelet[2127]: E0812 23:58:29.338436 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.338472 kubelet[2127]: W0812 23:58:29.338447 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.338472 kubelet[2127]: E0812 23:58:29.338457 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.338472 kubelet[2127]: I0812 23:58:29.338470 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc-socket-dir\") pod \"csi-node-driver-wzhss\" (UID: \"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc\") " pod="calico-system/csi-node-driver-wzhss" Aug 12 23:58:29.338652 kubelet[2127]: E0812 23:58:29.338640 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.338686 kubelet[2127]: W0812 23:58:29.338652 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.338686 kubelet[2127]: E0812 23:58:29.338661 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.338686 kubelet[2127]: I0812 23:58:29.338676 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw26k\" (UniqueName: \"kubernetes.io/projected/3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc-kube-api-access-cw26k\") pod \"csi-node-driver-wzhss\" (UID: \"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc\") " pod="calico-system/csi-node-driver-wzhss" Aug 12 23:58:29.339730 kubelet[2127]: E0812 23:58:29.339682 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.339730 kubelet[2127]: W0812 23:58:29.339700 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.339730 kubelet[2127]: E0812 23:58:29.339716 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.339880 kubelet[2127]: E0812 23:58:29.339865 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.339880 kubelet[2127]: W0812 23:58:29.339877 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.339938 kubelet[2127]: E0812 23:58:29.339886 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.340044 kubelet[2127]: E0812 23:58:29.340023 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.340044 kubelet[2127]: W0812 23:58:29.340034 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.340044 kubelet[2127]: E0812 23:58:29.340045 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.340202 kubelet[2127]: E0812 23:58:29.340183 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.340202 kubelet[2127]: W0812 23:58:29.340194 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.340268 kubelet[2127]: E0812 23:58:29.340203 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.340343 kubelet[2127]: E0812 23:58:29.340332 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.340343 kubelet[2127]: W0812 23:58:29.340341 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.340405 kubelet[2127]: E0812 23:58:29.340352 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.340496 kubelet[2127]: E0812 23:58:29.340480 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.340579 kubelet[2127]: W0812 23:58:29.340496 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.340579 kubelet[2127]: E0812 23:58:29.340508 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.340673 kubelet[2127]: E0812 23:58:29.340662 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.340702 kubelet[2127]: W0812 23:58:29.340673 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.340702 kubelet[2127]: E0812 23:58:29.340682 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.340821 kubelet[2127]: E0812 23:58:29.340810 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.340850 kubelet[2127]: W0812 23:58:29.340821 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.340850 kubelet[2127]: E0812 23:58:29.340829 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.362178 env[1323]: time="2025-08-12T23:58:29.362102628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mlxzr,Uid:45238ab2-8204-4035-8dad-19fd2e981b2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a\"" Aug 12 23:58:29.440011 kubelet[2127]: E0812 23:58:29.439960 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.440011 kubelet[2127]: W0812 23:58:29.439998 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.440177 kubelet[2127]: E0812 23:58:29.440022 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.440328 kubelet[2127]: E0812 23:58:29.440305 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.440600 kubelet[2127]: W0812 23:58:29.440335 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.440682 kubelet[2127]: E0812 23:58:29.440610 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.440888 kubelet[2127]: E0812 23:58:29.440876 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.440933 kubelet[2127]: W0812 23:58:29.440889 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.440933 kubelet[2127]: E0812 23:58:29.440900 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.441086 kubelet[2127]: E0812 23:58:29.441063 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.441086 kubelet[2127]: W0812 23:58:29.441075 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.441086 kubelet[2127]: E0812 23:58:29.441085 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.441348 kubelet[2127]: E0812 23:58:29.441327 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.441348 kubelet[2127]: W0812 23:58:29.441340 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.441426 kubelet[2127]: E0812 23:58:29.441351 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.441583 kubelet[2127]: E0812 23:58:29.441565 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.441583 kubelet[2127]: W0812 23:58:29.441577 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.441679 kubelet[2127]: E0812 23:58:29.441654 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.441773 kubelet[2127]: E0812 23:58:29.441756 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.441773 kubelet[2127]: W0812 23:58:29.441767 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.441841 kubelet[2127]: E0812 23:58:29.441821 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.441938 kubelet[2127]: E0812 23:58:29.441921 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.441987 kubelet[2127]: W0812 23:58:29.441932 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.441987 kubelet[2127]: E0812 23:58:29.441959 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:29.442129 kubelet[2127]: E0812 23:58:29.442114 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.442129 kubelet[2127]: W0812 23:58:29.442125 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.442238 kubelet[2127]: E0812 23:58:29.442165 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:29.442290 kubelet[2127]: E0812 23:58:29.442275 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:29.442290 kubelet[2127]: W0812 23:58:29.442284 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:29.442342 kubelet[2127]: E0812 23:58:29.442318 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 12 23:58:29.442446 kubelet[2127]: E0812 23:58:29.442428 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.442446 kubelet[2127]: W0812 23:58:29.442442 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.442523 kubelet[2127]: E0812 23:58:29.442479 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.442606 kubelet[2127]: E0812 23:58:29.442593 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.442606 kubelet[2127]: W0812 23:58:29.442603 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.442682 kubelet[2127]: E0812 23:58:29.442616 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.442781 kubelet[2127]: E0812 23:58:29.442764 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.442781 kubelet[2127]: W0812 23:58:29.442776 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.442855 kubelet[2127]: E0812 23:58:29.442801 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.443057 kubelet[2127]: E0812 23:58:29.443028 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.443057 kubelet[2127]: W0812 23:58:29.443049 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.443126 kubelet[2127]: E0812 23:58:29.443086 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.443241 kubelet[2127]: E0812 23:58:29.443210 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.443241 kubelet[2127]: W0812 23:58:29.443222 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.443313 kubelet[2127]: E0812 23:58:29.443257 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.443403 kubelet[2127]: E0812 23:58:29.443377 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.443403 kubelet[2127]: W0812 23:58:29.443387 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.443476 kubelet[2127]: E0812 23:58:29.443423 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.443567 kubelet[2127]: E0812 23:58:29.443532 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.443567 kubelet[2127]: W0812 23:58:29.443542 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.443654 kubelet[2127]: E0812 23:58:29.443604 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.443749 kubelet[2127]: E0812 23:58:29.443721 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.443749 kubelet[2127]: W0812 23:58:29.443733 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.443817 kubelet[2127]: E0812 23:58:29.443771 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.443924 kubelet[2127]: E0812 23:58:29.443892 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.443924 kubelet[2127]: W0812 23:58:29.443902 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.443924 kubelet[2127]: E0812 23:58:29.443915 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.444142 kubelet[2127]: E0812 23:58:29.444099 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.444142 kubelet[2127]: W0812 23:58:29.444120 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.444142 kubelet[2127]: E0812 23:58:29.444138 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.445586 kubelet[2127]: E0812 23:58:29.444323 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.445586 kubelet[2127]: W0812 23:58:29.444336 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.445586 kubelet[2127]: E0812 23:58:29.444361 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.445586 kubelet[2127]: E0812 23:58:29.444849 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.445586 kubelet[2127]: W0812 23:58:29.444861 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.445586 kubelet[2127]: E0812 23:58:29.444873 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.445586 kubelet[2127]: E0812 23:58:29.445324 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.445586 kubelet[2127]: W0812 23:58:29.445336 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.445851 kubelet[2127]: E0812 23:58:29.445599 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.447324 kubelet[2127]: E0812 23:58:29.447286 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.447324 kubelet[2127]: W0812 23:58:29.447310 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.447451 kubelet[2127]: E0812 23:58:29.447334 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.448588 kubelet[2127]: E0812 23:58:29.447597 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.448588 kubelet[2127]: W0812 23:58:29.447610 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.448588 kubelet[2127]: E0812 23:58:29.447620 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.459001 kubelet[2127]: E0812 23:58:29.458966 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:29.459001 kubelet[2127]: W0812 23:58:29.458990 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:29.459001 kubelet[2127]: E0812 23:58:29.459008 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:29.714000 audit[2706]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:58:29.716936 kernel: kauditd_printk_skb: 19 callbacks suppressed
Aug 12 23:58:29.717000 kernel: audit: type=1325 audit(1755043109.714:295): table=filter:97 family=2 entries=20 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:58:29.717021 kernel: audit: type=1300 audit(1755043109.714:295): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe7fdc5d0 a2=0 a3=1 items=0 ppid=2235 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:29.714000 audit[2706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe7fdc5d0 a2=0 a3=1 items=0 ppid=2235 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:29.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:58:29.723929 kernel: audit: type=1327 audit(1755043109.714:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:58:29.727000 audit[2706]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:58:29.727000 audit[2706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7fdc5d0 a2=0 a3=1 items=0 ppid=2235 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:29.732767 kernel: audit: type=1325 audit(1755043109.727:296): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:58:29.732817 kernel: audit: type=1300 audit(1755043109.727:296): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7fdc5d0 a2=0 a3=1 items=0 ppid=2235 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:58:29.732834 kernel: audit: type=1327 audit(1755043109.727:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:58:29.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:58:29.926219 systemd[1]: run-containerd-runc-k8s.io-52e12e4457bc4d69caeff5c2213fe0c2630e77eeafb1468b74ff1a6080549906-runc.xADREy.mount: Deactivated successfully.
Aug 12 23:58:30.145261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102013378.mount: Deactivated successfully.
Aug 12 23:58:30.707037 kubelet[2127]: E0812 23:58:30.706988 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzhss" podUID="3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc"
Aug 12 23:58:30.841676 env[1323]: time="2025-08-12T23:58:30.841611435Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:58:30.843180 env[1323]: time="2025-08-12T23:58:30.843137165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:58:30.844707 env[1323]: time="2025-08-12T23:58:30.844670456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:58:30.847672 env[1323]: time="2025-08-12T23:58:30.847609937Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:58:30.848071 env[1323]: time="2025-08-12T23:58:30.848039287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Aug 12 23:58:30.849515 env[1323]: time="2025-08-12T23:58:30.849491164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 12 23:58:30.862221 env[1323]: time="2025-08-12T23:58:30.862168317Z" level=info msg="CreateContainer within sandbox \"52e12e4457bc4d69caeff5c2213fe0c2630e77eeafb1468b74ff1a6080549906\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 12 23:58:30.879248 env[1323]: time="2025-08-12T23:58:30.879189301Z" level=info msg="CreateContainer within sandbox \"52e12e4457bc4d69caeff5c2213fe0c2630e77eeafb1468b74ff1a6080549906\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e257978c8b3bc8a377d9d06669db8ade336a4ae3f30c146f3410d5cfd6213165\""
Aug 12 23:58:30.879739 env[1323]: time="2025-08-12T23:58:30.879711667Z" level=info msg="StartContainer for \"e257978c8b3bc8a377d9d06669db8ade336a4ae3f30c146f3410d5cfd6213165\""
Aug 12 23:58:31.009353 env[1323]: time="2025-08-12T23:58:31.009297677Z" level=info msg="StartContainer for \"e257978c8b3bc8a377d9d06669db8ade336a4ae3f30c146f3410d5cfd6213165\" returns successfully"
Aug 12 23:58:31.767661 kubelet[2127]: E0812 23:58:31.767609 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:58:31.853664 kubelet[2127]: E0812 23:58:31.853615 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.853664 kubelet[2127]: W0812 23:58:31.853657 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.853891 kubelet[2127]: E0812 23:58:31.853677 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.853891 kubelet[2127]: E0812 23:58:31.853852 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.853891 kubelet[2127]: W0812 23:58:31.853860 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.853891 kubelet[2127]: E0812 23:58:31.853870 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.857211 kubelet[2127]: E0812 23:58:31.857192 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.857308 kubelet[2127]: W0812 23:58:31.857293 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.857374 kubelet[2127]: E0812 23:58:31.857362 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.857672 kubelet[2127]: E0812 23:58:31.857614 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.857809 kubelet[2127]: W0812 23:58:31.857793 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.857879 kubelet[2127]: E0812 23:58:31.857868 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.858157 kubelet[2127]: E0812 23:58:31.858143 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.858249 kubelet[2127]: W0812 23:58:31.858236 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.858313 kubelet[2127]: E0812 23:58:31.858302 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.858521 kubelet[2127]: E0812 23:58:31.858510 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.858601 kubelet[2127]: W0812 23:58:31.858589 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.858681 kubelet[2127]: E0812 23:58:31.858669 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.858882 kubelet[2127]: E0812 23:58:31.858870 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.858967 kubelet[2127]: W0812 23:58:31.858954 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.859034 kubelet[2127]: E0812 23:58:31.859023 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.859234 kubelet[2127]: E0812 23:58:31.859222 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.859313 kubelet[2127]: W0812 23:58:31.859301 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.860200 kubelet[2127]: E0812 23:58:31.860182 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.861437 kubelet[2127]: E0812 23:58:31.861305 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.861437 kubelet[2127]: W0812 23:58:31.861321 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.861437 kubelet[2127]: E0812 23:58:31.861335 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.861742 kubelet[2127]: E0812 23:58:31.861706 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.861742 kubelet[2127]: W0812 23:58:31.861726 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.861742 kubelet[2127]: E0812 23:58:31.861740 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.862015 kubelet[2127]: E0812 23:58:31.861988 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.862015 kubelet[2127]: W0812 23:58:31.862004 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.862015 kubelet[2127]: E0812 23:58:31.862014 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.862204 kubelet[2127]: E0812 23:58:31.862162 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.862204 kubelet[2127]: W0812 23:58:31.862174 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.862204 kubelet[2127]: E0812 23:58:31.862183 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.862410 kubelet[2127]: E0812 23:58:31.862387 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.862410 kubelet[2127]: W0812 23:58:31.862399 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.862410 kubelet[2127]: E0812 23:58:31.862408 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.862550 kubelet[2127]: E0812 23:58:31.862540 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.862579 kubelet[2127]: W0812 23:58:31.862552 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.862579 kubelet[2127]: E0812 23:58:31.862560 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.862754 kubelet[2127]: E0812 23:58:31.862735 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.862754 kubelet[2127]: W0812 23:58:31.862746 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.862754 kubelet[2127]: E0812 23:58:31.862754 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.862995 kubelet[2127]: E0812 23:58:31.862983 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.862995 kubelet[2127]: W0812 23:58:31.862993 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.863074 kubelet[2127]: E0812 23:58:31.863002 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.863203 kubelet[2127]: E0812 23:58:31.863191 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.863203 kubelet[2127]: W0812 23:58:31.863202 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.863263 kubelet[2127]: E0812 23:58:31.863215 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.863400 kubelet[2127]: E0812 23:58:31.863390 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.863464 kubelet[2127]: W0812 23:58:31.863400 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.863464 kubelet[2127]: E0812 23:58:31.863411 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.863579 kubelet[2127]: E0812 23:58:31.863568 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.863579 kubelet[2127]: W0812 23:58:31.863578 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.863643 kubelet[2127]: E0812 23:58:31.863589 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.863756 kubelet[2127]: E0812 23:58:31.863745 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.863756 kubelet[2127]: W0812 23:58:31.863755 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.863814 kubelet[2127]: E0812 23:58:31.863770 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.863919 kubelet[2127]: E0812 23:58:31.863910 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.863956 kubelet[2127]: W0812 23:58:31.863919 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.863956 kubelet[2127]: E0812 23:58:31.863939 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.864170 kubelet[2127]: E0812 23:58:31.864158 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.864170 kubelet[2127]: W0812 23:58:31.864169 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.864236 kubelet[2127]: E0812 23:58:31.864186 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.864435 kubelet[2127]: E0812 23:58:31.864419 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.864475 kubelet[2127]: W0812 23:58:31.864435 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.864475 kubelet[2127]: E0812 23:58:31.864451 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.864611 kubelet[2127]: E0812 23:58:31.864600 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.864611 kubelet[2127]: W0812 23:58:31.864610 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.864704 kubelet[2127]: E0812 23:58:31.864670 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.864778 kubelet[2127]: E0812 23:58:31.864767 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.864804 kubelet[2127]: W0812 23:58:31.864777 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.864804 kubelet[2127]: E0812 23:58:31.864797 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.864944 kubelet[2127]: E0812 23:58:31.864928 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.864980 kubelet[2127]: W0812 23:58:31.864944 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.864980 kubelet[2127]: E0812 23:58:31.864959 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.865120 kubelet[2127]: E0812 23:58:31.865109 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.865150 kubelet[2127]: W0812 23:58:31.865120 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.865150 kubelet[2127]: E0812 23:58:31.865132 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.865318 kubelet[2127]: E0812 23:58:31.865307 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.865318 kubelet[2127]: W0812 23:58:31.865317 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.865396 kubelet[2127]: E0812 23:58:31.865330 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.865583 kubelet[2127]: E0812 23:58:31.865565 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.865644 kubelet[2127]: W0812 23:58:31.865582 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.865644 kubelet[2127]: E0812 23:58:31.865594 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.865777 kubelet[2127]: E0812 23:58:31.865762 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.865777 kubelet[2127]: W0812 23:58:31.865776 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.865851 kubelet[2127]: E0812 23:58:31.865791 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:58:31.866002 kubelet[2127]: E0812 23:58:31.865990 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:58:31.866046 kubelet[2127]: W0812 23:58:31.866003 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:58:31.866046 kubelet[2127]: E0812 23:58:31.866016 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 12 23:58:31.866950 kubelet[2127]: E0812 23:58:31.866917 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:31.866950 kubelet[2127]: W0812 23:58:31.866944 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:31.867051 kubelet[2127]: E0812 23:58:31.866959 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:58:31.867248 kubelet[2127]: E0812 23:58:31.867227 2127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:58:31.867248 kubelet[2127]: W0812 23:58:31.867245 2127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:58:31.867330 kubelet[2127]: E0812 23:58:31.867257 2127 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:58:31.947891 env[1323]: time="2025-08-12T23:58:31.947841924Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:31.949886 env[1323]: time="2025-08-12T23:58:31.949841677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:31.951696 env[1323]: time="2025-08-12T23:58:31.951661722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:31.953530 env[1323]: time="2025-08-12T23:58:31.953492488Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:31.954051 env[1323]: time="2025-08-12T23:58:31.954010089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 12 23:58:31.962422 env[1323]: time="2025-08-12T23:58:31.962347873Z" level=info msg="CreateContainer within sandbox \"438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 12 23:58:32.009491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091279062.mount: Deactivated successfully. 
Aug 12 23:58:32.015318 env[1323]: time="2025-08-12T23:58:32.015269450Z" level=info msg="CreateContainer within sandbox \"438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"20e09718ea53d66197b9259db1e2aafbb24a857031d29b4d628b082d5b8ba24b\"" Aug 12 23:58:32.021664 env[1323]: time="2025-08-12T23:58:32.019912945Z" level=info msg="StartContainer for \"20e09718ea53d66197b9259db1e2aafbb24a857031d29b4d628b082d5b8ba24b\"" Aug 12 23:58:32.102501 env[1323]: time="2025-08-12T23:58:32.102440616Z" level=info msg="StartContainer for \"20e09718ea53d66197b9259db1e2aafbb24a857031d29b4d628b082d5b8ba24b\" returns successfully" Aug 12 23:58:32.145683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20e09718ea53d66197b9259db1e2aafbb24a857031d29b4d628b082d5b8ba24b-rootfs.mount: Deactivated successfully. Aug 12 23:58:32.160852 env[1323]: time="2025-08-12T23:58:32.160723179Z" level=info msg="shim disconnected" id=20e09718ea53d66197b9259db1e2aafbb24a857031d29b4d628b082d5b8ba24b Aug 12 23:58:32.160852 env[1323]: time="2025-08-12T23:58:32.160768785Z" level=warning msg="cleaning up after shim disconnected" id=20e09718ea53d66197b9259db1e2aafbb24a857031d29b4d628b082d5b8ba24b namespace=k8s.io Aug 12 23:58:32.160852 env[1323]: time="2025-08-12T23:58:32.160784788Z" level=info msg="cleaning up dead shim" Aug 12 23:58:32.178344 env[1323]: time="2025-08-12T23:58:32.178294928Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:58:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2831 runtime=io.containerd.runc.v2\n" Aug 12 23:58:32.708602 kubelet[2127]: E0812 23:58:32.708546 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzhss" 
podUID="3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc" Aug 12 23:58:32.761024 kubelet[2127]: I0812 23:58:32.760988 2127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:58:32.761313 kubelet[2127]: E0812 23:58:32.761290 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:32.761886 env[1323]: time="2025-08-12T23:58:32.761845422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 12 23:58:32.783439 kubelet[2127]: I0812 23:58:32.783379 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fb8f8b8b6-kzmdg" podStartSLOduration=3.037563947 podStartE2EDuration="4.783363282s" podCreationTimestamp="2025-08-12 23:58:28 +0000 UTC" firstStartedPulling="2025-08-12 23:58:29.1035134 +0000 UTC m=+18.501018151" lastFinishedPulling="2025-08-12 23:58:30.849312735 +0000 UTC m=+20.246817486" observedRunningTime="2025-08-12 23:58:31.814392136 +0000 UTC m=+21.211896847" watchObservedRunningTime="2025-08-12 23:58:32.783363282 +0000 UTC m=+22.180868033" Aug 12 23:58:34.706852 kubelet[2127]: E0812 23:58:34.706799 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzhss" podUID="3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc" Aug 12 23:58:35.319650 env[1323]: time="2025-08-12T23:58:35.319597147Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:35.321115 env[1323]: time="2025-08-12T23:58:35.321088824Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:35.322506 env[1323]: time="2025-08-12T23:58:35.322482248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:35.324225 env[1323]: time="2025-08-12T23:58:35.324198674Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:35.324912 env[1323]: time="2025-08-12T23:58:35.324884285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 12 23:58:35.327444 env[1323]: time="2025-08-12T23:58:35.327399217Z" level=info msg="CreateContainer within sandbox \"438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 12 23:58:35.340722 env[1323]: time="2025-08-12T23:58:35.340622521Z" level=info msg="CreateContainer within sandbox \"438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c56c775074187ba9ee7d8bf7f78b8671e828cb2eeebbf7fa2ebec804601d8a6\"" Aug 12 23:58:35.341523 env[1323]: time="2025-08-12T23:58:35.341485435Z" level=info msg="StartContainer for \"6c56c775074187ba9ee7d8bf7f78b8671e828cb2eeebbf7fa2ebec804601d8a6\"" Aug 12 23:58:35.493905 env[1323]: time="2025-08-12T23:58:35.493843094Z" level=info msg="StartContainer for \"6c56c775074187ba9ee7d8bf7f78b8671e828cb2eeebbf7fa2ebec804601d8a6\" returns successfully" Aug 12 23:58:36.066867 env[1323]: time="2025-08-12T23:58:36.066810935Z" level=error 
msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:58:36.091066 env[1323]: time="2025-08-12T23:58:36.091014442Z" level=info msg="shim disconnected" id=6c56c775074187ba9ee7d8bf7f78b8671e828cb2eeebbf7fa2ebec804601d8a6 Aug 12 23:58:36.091066 env[1323]: time="2025-08-12T23:58:36.091060488Z" level=warning msg="cleaning up after shim disconnected" id=6c56c775074187ba9ee7d8bf7f78b8671e828cb2eeebbf7fa2ebec804601d8a6 namespace=k8s.io Aug 12 23:58:36.091066 env[1323]: time="2025-08-12T23:58:36.091070489Z" level=info msg="cleaning up dead shim" Aug 12 23:58:36.097967 env[1323]: time="2025-08-12T23:58:36.097898114Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:58:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2904 runtime=io.containerd.runc.v2\n" Aug 12 23:58:36.170237 kubelet[2127]: I0812 23:58:36.169726 2127 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 12 23:58:36.296822 kubelet[2127]: I0812 23:58:36.296775 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xp4r\" (UniqueName: \"kubernetes.io/projected/d9c82a29-832e-4f27-bc43-e1ba46fc34e5-kube-api-access-7xp4r\") pod \"calico-kube-controllers-645c974fb8-zw4bc\" (UID: \"d9c82a29-832e-4f27-bc43-e1ba46fc34e5\") " pod="calico-system/calico-kube-controllers-645c974fb8-zw4bc" Aug 12 23:58:36.296822 kubelet[2127]: I0812 23:58:36.296821 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9c82a29-832e-4f27-bc43-e1ba46fc34e5-tigera-ca-bundle\") pod \"calico-kube-controllers-645c974fb8-zw4bc\" (UID: \"d9c82a29-832e-4f27-bc43-e1ba46fc34e5\") " 
pod="calico-system/calico-kube-controllers-645c974fb8-zw4bc" Aug 12 23:58:36.338460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c56c775074187ba9ee7d8bf7f78b8671e828cb2eeebbf7fa2ebec804601d8a6-rootfs.mount: Deactivated successfully. Aug 12 23:58:36.398835 kubelet[2127]: I0812 23:58:36.398778 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-backend-key-pair\") pod \"whisker-6f6f6d7688-8r2qj\" (UID: \"bec8aa18-c5b5-455b-98e4-ec5961682033\") " pod="calico-system/whisker-6f6f6d7688-8r2qj" Aug 12 23:58:36.398835 kubelet[2127]: I0812 23:58:36.398831 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrzv\" (UniqueName: \"kubernetes.io/projected/bec8aa18-c5b5-455b-98e4-ec5961682033-kube-api-access-kzrzv\") pod \"whisker-6f6f6d7688-8r2qj\" (UID: \"bec8aa18-c5b5-455b-98e4-ec5961682033\") " pod="calico-system/whisker-6f6f6d7688-8r2qj" Aug 12 23:58:36.399037 kubelet[2127]: I0812 23:58:36.398854 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8-config\") pod \"goldmane-58fd7646b9-h599k\" (UID: \"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8\") " pod="calico-system/goldmane-58fd7646b9-h599k" Aug 12 23:58:36.399037 kubelet[2127]: I0812 23:58:36.398921 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ztz9\" (UniqueName: \"kubernetes.io/projected/fe2dc283-a072-4194-b3a7-efdf3c371b0b-kube-api-access-4ztz9\") pod \"calico-apiserver-54ddd56b5-bk8rz\" (UID: \"fe2dc283-a072-4194-b3a7-efdf3c371b0b\") " pod="calico-apiserver/calico-apiserver-54ddd56b5-bk8rz" Aug 12 23:58:36.399037 kubelet[2127]: I0812 23:58:36.398951 2127 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lp6\" (UniqueName: \"kubernetes.io/projected/3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8-kube-api-access-m5lp6\") pod \"goldmane-58fd7646b9-h599k\" (UID: \"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8\") " pod="calico-system/goldmane-58fd7646b9-h599k" Aug 12 23:58:36.399037 kubelet[2127]: I0812 23:58:36.398970 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4410b28d-7c64-4d83-b0dc-3486564fba4c-calico-apiserver-certs\") pod \"calico-apiserver-54ddd56b5-vlgsf\" (UID: \"4410b28d-7c64-4d83-b0dc-3486564fba4c\") " pod="calico-apiserver/calico-apiserver-54ddd56b5-vlgsf" Aug 12 23:58:36.399037 kubelet[2127]: I0812 23:58:36.398987 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2k6\" (UniqueName: \"kubernetes.io/projected/9dc08e5e-ae34-4c36-9f26-39270357d1c4-kube-api-access-4q2k6\") pod \"coredns-7c65d6cfc9-mxcdd\" (UID: \"9dc08e5e-ae34-4c36-9f26-39270357d1c4\") " pod="kube-system/coredns-7c65d6cfc9-mxcdd" Aug 12 23:58:36.399168 kubelet[2127]: I0812 23:58:36.399008 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d94074-07d3-4e8f-bed7-18c1079c94eb-config-volume\") pod \"coredns-7c65d6cfc9-66lv2\" (UID: \"e2d94074-07d3-4e8f-bed7-18c1079c94eb\") " pod="kube-system/coredns-7c65d6cfc9-66lv2" Aug 12 23:58:36.399168 kubelet[2127]: I0812 23:58:36.399026 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc08e5e-ae34-4c36-9f26-39270357d1c4-config-volume\") pod \"coredns-7c65d6cfc9-mxcdd\" (UID: \"9dc08e5e-ae34-4c36-9f26-39270357d1c4\") " pod="kube-system/coredns-7c65d6cfc9-mxcdd" Aug 12 23:58:36.399168 
kubelet[2127]: I0812 23:58:36.399043 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-h599k\" (UID: \"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8\") " pod="calico-system/goldmane-58fd7646b9-h599k" Aug 12 23:58:36.399168 kubelet[2127]: I0812 23:58:36.399076 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-ca-bundle\") pod \"whisker-6f6f6d7688-8r2qj\" (UID: \"bec8aa18-c5b5-455b-98e4-ec5961682033\") " pod="calico-system/whisker-6f6f6d7688-8r2qj" Aug 12 23:58:36.399168 kubelet[2127]: I0812 23:58:36.399093 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54d6g\" (UniqueName: \"kubernetes.io/projected/e2d94074-07d3-4e8f-bed7-18c1079c94eb-kube-api-access-54d6g\") pod \"coredns-7c65d6cfc9-66lv2\" (UID: \"e2d94074-07d3-4e8f-bed7-18c1079c94eb\") " pod="kube-system/coredns-7c65d6cfc9-66lv2" Aug 12 23:58:36.399292 kubelet[2127]: I0812 23:58:36.399111 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe2dc283-a072-4194-b3a7-efdf3c371b0b-calico-apiserver-certs\") pod \"calico-apiserver-54ddd56b5-bk8rz\" (UID: \"fe2dc283-a072-4194-b3a7-efdf3c371b0b\") " pod="calico-apiserver/calico-apiserver-54ddd56b5-bk8rz" Aug 12 23:58:36.399292 kubelet[2127]: I0812 23:58:36.399127 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8-goldmane-key-pair\") pod \"goldmane-58fd7646b9-h599k\" (UID: \"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8\") 
" pod="calico-system/goldmane-58fd7646b9-h599k" Aug 12 23:58:36.399292 kubelet[2127]: I0812 23:58:36.399144 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5tsk\" (UniqueName: \"kubernetes.io/projected/4410b28d-7c64-4d83-b0dc-3486564fba4c-kube-api-access-j5tsk\") pod \"calico-apiserver-54ddd56b5-vlgsf\" (UID: \"4410b28d-7c64-4d83-b0dc-3486564fba4c\") " pod="calico-apiserver/calico-apiserver-54ddd56b5-vlgsf" Aug 12 23:58:36.512271 env[1323]: time="2025-08-12T23:58:36.511904380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645c974fb8-zw4bc,Uid:d9c82a29-832e-4f27-bc43-e1ba46fc34e5,Namespace:calico-system,Attempt:0,}" Aug 12 23:58:36.525594 env[1323]: time="2025-08-12T23:58:36.525543149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-bk8rz,Uid:fe2dc283-a072-4194-b3a7-efdf3c371b0b,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:58:36.527941 env[1323]: time="2025-08-12T23:58:36.527727306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f6d7688-8r2qj,Uid:bec8aa18-c5b5-455b-98e4-ec5961682033,Namespace:calico-system,Attempt:0,}" Aug 12 23:58:36.708696 env[1323]: time="2025-08-12T23:58:36.708003472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzhss,Uid:3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc,Namespace:calico-system,Attempt:0,}" Aug 12 23:58:36.770499 env[1323]: time="2025-08-12T23:58:36.770416381Z" level=error msg="Failed to destroy network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.771611 env[1323]: time="2025-08-12T23:58:36.771509640Z" level=error msg="encountered an error cleaning up failed sandbox 
\"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.772206 env[1323]: time="2025-08-12T23:58:36.772172244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645c974fb8-zw4bc,Uid:d9c82a29-832e-4f27-bc43-e1ba46fc34e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.772549 kubelet[2127]: E0812 23:58:36.772509 2127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.772677 kubelet[2127]: E0812 23:58:36.772571 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645c974fb8-zw4bc" Aug 12 23:58:36.772677 kubelet[2127]: E0812 23:58:36.772592 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645c974fb8-zw4bc" Aug 12 23:58:36.772677 kubelet[2127]: E0812 23:58:36.772660 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-645c974fb8-zw4bc_calico-system(d9c82a29-832e-4f27-bc43-e1ba46fc34e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-645c974fb8-zw4bc_calico-system(d9c82a29-832e-4f27-bc43-e1ba46fc34e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645c974fb8-zw4bc" podUID="d9c82a29-832e-4f27-bc43-e1ba46fc34e5" Aug 12 23:58:36.774693 env[1323]: time="2025-08-12T23:58:36.774662479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 12 23:58:36.777284 env[1323]: time="2025-08-12T23:58:36.777244446Z" level=error msg="Failed to destroy network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.777777 env[1323]: time="2025-08-12T23:58:36.777741989Z" level=error msg="encountered an error cleaning up failed sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.779008 env[1323]: time="2025-08-12T23:58:36.778723474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-bk8rz,Uid:fe2dc283-a072-4194-b3a7-efdf3c371b0b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.779544 kubelet[2127]: E0812 23:58:36.779484 2127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.783077 kubelet[2127]: E0812 23:58:36.782741 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54ddd56b5-bk8rz" Aug 12 23:58:36.783077 kubelet[2127]: E0812 23:58:36.782775 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54ddd56b5-bk8rz" Aug 12 23:58:36.783077 kubelet[2127]: E0812 23:58:36.782811 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54ddd56b5-bk8rz_calico-apiserver(fe2dc283-a072-4194-b3a7-efdf3c371b0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54ddd56b5-bk8rz_calico-apiserver(fe2dc283-a072-4194-b3a7-efdf3c371b0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54ddd56b5-bk8rz" podUID="fe2dc283-a072-4194-b3a7-efdf3c371b0b" Aug 12 23:58:36.784540 env[1323]: time="2025-08-12T23:58:36.784480043Z" level=error msg="Failed to destroy network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.784914 env[1323]: time="2025-08-12T23:58:36.784882254Z" level=error msg="encountered an error cleaning up failed sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.784986 env[1323]: time="2025-08-12T23:58:36.784933101Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6f6f6d7688-8r2qj,Uid:bec8aa18-c5b5-455b-98e4-ec5961682033,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.785657 kubelet[2127]: E0812 23:58:36.785371 2127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.785657 kubelet[2127]: E0812 23:58:36.785421 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f6d7688-8r2qj" Aug 12 23:58:36.785657 kubelet[2127]: E0812 23:58:36.785439 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f6d7688-8r2qj" Aug 12 23:58:36.785820 kubelet[2127]: E0812 23:58:36.785473 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-6f6f6d7688-8r2qj_calico-system(bec8aa18-c5b5-455b-98e4-ec5961682033)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f6f6d7688-8r2qj_calico-system(bec8aa18-c5b5-455b-98e4-ec5961682033)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f6f6d7688-8r2qj" podUID="bec8aa18-c5b5-455b-98e4-ec5961682033" Aug 12 23:58:36.802480 env[1323]: time="2025-08-12T23:58:36.802426998Z" level=error msg="Failed to destroy network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.802964 env[1323]: time="2025-08-12T23:58:36.802933342Z" level=error msg="encountered an error cleaning up failed sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.803110 env[1323]: time="2025-08-12T23:58:36.803062158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzhss,Uid:3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Aug 12 23:58:36.803657 kubelet[2127]: E0812 23:58:36.803330 2127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.803657 kubelet[2127]: E0812 23:58:36.803378 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzhss" Aug 12 23:58:36.803657 kubelet[2127]: E0812 23:58:36.803395 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzhss" Aug 12 23:58:36.803794 kubelet[2127]: E0812 23:58:36.803436 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzhss_calico-system(3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzhss_calico-system(3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzhss" podUID="3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc" Aug 12 23:58:36.812069 env[1323]: time="2025-08-12T23:58:36.812039216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-h599k,Uid:3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8,Namespace:calico-system,Attempt:0,}" Aug 12 23:58:36.819693 kubelet[2127]: E0812 23:58:36.819663 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:36.819693 kubelet[2127]: E0812 23:58:36.819690 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:36.820247 env[1323]: time="2025-08-12T23:58:36.820209491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mxcdd,Uid:9dc08e5e-ae34-4c36-9f26-39270357d1c4,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:36.820404 env[1323]: time="2025-08-12T23:58:36.820210611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-66lv2,Uid:e2d94074-07d3-4e8f-bed7-18c1079c94eb,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:36.822145 env[1323]: time="2025-08-12T23:58:36.822112932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-vlgsf,Uid:4410b28d-7c64-4d83-b0dc-3486564fba4c,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:58:36.878122 env[1323]: time="2025-08-12T23:58:36.878068063Z" level=error msg="Failed to destroy network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.878596 env[1323]: time="2025-08-12T23:58:36.878562766Z" level=error msg="encountered an error cleaning up failed sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.878772 env[1323]: time="2025-08-12T23:58:36.878743349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-h599k,Uid:3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.879087 kubelet[2127]: E0812 23:58:36.879052 2127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.879160 kubelet[2127]: E0812 23:58:36.879110 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-h599k" Aug 12 
23:58:36.879160 kubelet[2127]: E0812 23:58:36.879130 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-h599k" Aug 12 23:58:36.879212 kubelet[2127]: E0812 23:58:36.879175 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-h599k_calico-system(3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-h599k_calico-system(3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-h599k" podUID="3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8" Aug 12 23:58:36.906939 env[1323]: time="2025-08-12T23:58:36.906883715Z" level=error msg="Failed to destroy network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.907487 env[1323]: time="2025-08-12T23:58:36.907453227Z" level=error msg="encountered an error cleaning up failed sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.907636 env[1323]: time="2025-08-12T23:58:36.907590045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mxcdd,Uid:9dc08e5e-ae34-4c36-9f26-39270357d1c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.907951 kubelet[2127]: E0812 23:58:36.907912 2127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.908037 kubelet[2127]: E0812 23:58:36.907981 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mxcdd" Aug 12 23:58:36.908037 kubelet[2127]: E0812 23:58:36.908000 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mxcdd" Aug 12 23:58:36.908095 kubelet[2127]: E0812 23:58:36.908045 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mxcdd_kube-system(9dc08e5e-ae34-4c36-9f26-39270357d1c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mxcdd_kube-system(9dc08e5e-ae34-4c36-9f26-39270357d1c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mxcdd" podUID="9dc08e5e-ae34-4c36-9f26-39270357d1c4" Aug 12 23:58:36.908173 env[1323]: time="2025-08-12T23:58:36.907918886Z" level=error msg="Failed to destroy network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.909210 env[1323]: time="2025-08-12T23:58:36.909172405Z" level=error msg="encountered an error cleaning up failed sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.909341 env[1323]: time="2025-08-12T23:58:36.909315503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-vlgsf,Uid:4410b28d-7c64-4d83-b0dc-3486564fba4c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.909637 kubelet[2127]: E0812 23:58:36.909587 2127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.909730 kubelet[2127]: E0812 23:58:36.909656 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54ddd56b5-vlgsf" Aug 12 23:58:36.909730 kubelet[2127]: E0812 23:58:36.909678 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54ddd56b5-vlgsf" Aug 12 23:58:36.909791 kubelet[2127]: E0812 23:58:36.909724 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54ddd56b5-vlgsf_calico-apiserver(4410b28d-7c64-4d83-b0dc-3486564fba4c)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-54ddd56b5-vlgsf_calico-apiserver(4410b28d-7c64-4d83-b0dc-3486564fba4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54ddd56b5-vlgsf" podUID="4410b28d-7c64-4d83-b0dc-3486564fba4c" Aug 12 23:58:36.923102 env[1323]: time="2025-08-12T23:58:36.923046924Z" level=error msg="Failed to destroy network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.923601 env[1323]: time="2025-08-12T23:58:36.923567710Z" level=error msg="encountered an error cleaning up failed sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.923759 env[1323]: time="2025-08-12T23:58:36.923730410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-66lv2,Uid:e2d94074-07d3-4e8f-bed7-18c1079c94eb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.924115 kubelet[2127]: E0812 23:58:36.924052 2127 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:36.924194 kubelet[2127]: E0812 23:58:36.924116 2127 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-66lv2" Aug 12 23:58:36.924194 kubelet[2127]: E0812 23:58:36.924135 2127 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-66lv2" Aug 12 23:58:36.924257 kubelet[2127]: E0812 23:58:36.924183 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-66lv2_kube-system(e2d94074-07d3-4e8f-bed7-18c1079c94eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-66lv2_kube-system(e2d94074-07d3-4e8f-bed7-18c1079c94eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-66lv2" podUID="e2d94074-07d3-4e8f-bed7-18c1079c94eb" Aug 12 23:58:37.775934 kubelet[2127]: I0812 23:58:37.775899 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:58:37.776873 env[1323]: time="2025-08-12T23:58:37.776827185Z" level=info msg="StopPodSandbox for \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\"" Aug 12 23:58:37.782441 kubelet[2127]: I0812 23:58:37.781647 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:58:37.784153 env[1323]: time="2025-08-12T23:58:37.784105032Z" level=info msg="StopPodSandbox for \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\"" Aug 12 23:58:37.784858 kubelet[2127]: I0812 23:58:37.784830 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:58:37.785690 env[1323]: time="2025-08-12T23:58:37.785657981Z" level=info msg="StopPodSandbox for \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\"" Aug 12 23:58:37.791544 kubelet[2127]: I0812 23:58:37.791515 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:58:37.792290 env[1323]: time="2025-08-12T23:58:37.792251984Z" level=info msg="StopPodSandbox for \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\"" Aug 12 23:58:37.797598 kubelet[2127]: I0812 23:58:37.797558 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:58:37.798295 env[1323]: 
time="2025-08-12T23:58:37.798247435Z" level=info msg="StopPodSandbox for \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\"" Aug 12 23:58:37.800127 kubelet[2127]: I0812 23:58:37.799462 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:58:37.800315 env[1323]: time="2025-08-12T23:58:37.800220515Z" level=info msg="StopPodSandbox for \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\"" Aug 12 23:58:37.801314 kubelet[2127]: I0812 23:58:37.800859 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:58:37.801405 env[1323]: time="2025-08-12T23:58:37.801367415Z" level=info msg="StopPodSandbox for \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\"" Aug 12 23:58:37.803298 kubelet[2127]: I0812 23:58:37.802675 2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:58:37.803405 env[1323]: time="2025-08-12T23:58:37.803224562Z" level=info msg="StopPodSandbox for \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\"" Aug 12 23:58:37.827490 env[1323]: time="2025-08-12T23:58:37.827424751Z" level=error msg="StopPodSandbox for \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\" failed" error="failed to destroy network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.829609 kubelet[2127]: E0812 23:58:37.828898 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:58:37.829609 kubelet[2127]: E0812 23:58:37.828973 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134"} Aug 12 23:58:37.829609 kubelet[2127]: E0812 23:58:37.829065 2127 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9dc08e5e-ae34-4c36-9f26-39270357d1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.829609 kubelet[2127]: E0812 23:58:37.829099 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9dc08e5e-ae34-4c36-9f26-39270357d1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mxcdd" podUID="9dc08e5e-ae34-4c36-9f26-39270357d1c4" Aug 12 23:58:37.883420 env[1323]: time="2025-08-12T23:58:37.883365928Z" level=error msg="StopPodSandbox for \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\" failed" error="failed to destroy network for sandbox 
\"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.883981 kubelet[2127]: E0812 23:58:37.883802 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:58:37.883981 kubelet[2127]: E0812 23:58:37.883864 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f"} Aug 12 23:58:37.883981 kubelet[2127]: E0812 23:58:37.883900 2127 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.883981 kubelet[2127]: E0812 23:58:37.883931 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzhss" podUID="3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc" Aug 12 23:58:37.889233 env[1323]: time="2025-08-12T23:58:37.889178316Z" level=error msg="StopPodSandbox for \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\" failed" error="failed to destroy network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.889567 env[1323]: time="2025-08-12T23:58:37.889453750Z" level=error msg="StopPodSandbox for \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\" failed" error="failed to destroy network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.889960 kubelet[2127]: E0812 23:58:37.889803 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:58:37.889960 kubelet[2127]: E0812 23:58:37.889858 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474"} Aug 12 23:58:37.889960 kubelet[2127]: E0812 23:58:37.889890 
2127 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4410b28d-7c64-4d83-b0dc-3486564fba4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.889960 kubelet[2127]: E0812 23:58:37.889910 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4410b28d-7c64-4d83-b0dc-3486564fba4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54ddd56b5-vlgsf" podUID="4410b28d-7c64-4d83-b0dc-3486564fba4c" Aug 12 23:58:37.890538 kubelet[2127]: E0812 23:58:37.890353 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:58:37.890538 kubelet[2127]: E0812 23:58:37.890393 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea"} Aug 12 23:58:37.890538 kubelet[2127]: E0812 23:58:37.890423 2127 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.890538 kubelet[2127]: E0812 23:58:37.890511 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-h599k" podUID="3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8" Aug 12 23:58:37.907075 env[1323]: time="2025-08-12T23:58:37.907005448Z" level=error msg="StopPodSandbox for \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\" failed" error="failed to destroy network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.907588 kubelet[2127]: E0812 23:58:37.907394 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:58:37.907588 kubelet[2127]: E0812 23:58:37.907476 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4"} Aug 12 23:58:37.907588 kubelet[2127]: E0812 23:58:37.907520 2127 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2d94074-07d3-4e8f-bed7-18c1079c94eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.907588 kubelet[2127]: E0812 23:58:37.907549 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2d94074-07d3-4e8f-bed7-18c1079c94eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-66lv2" podUID="e2d94074-07d3-4e8f-bed7-18c1079c94eb" Aug 12 23:58:37.921076 env[1323]: time="2025-08-12T23:58:37.921014596Z" level=error msg="StopPodSandbox for \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\" failed" error="failed to destroy network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.921339 kubelet[2127]: E0812 23:58:37.921289 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:58:37.921455 kubelet[2127]: E0812 23:58:37.921354 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2"} Aug 12 23:58:37.921455 kubelet[2127]: E0812 23:58:37.921393 2127 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bec8aa18-c5b5-455b-98e4-ec5961682033\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.921455 kubelet[2127]: E0812 23:58:37.921433 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bec8aa18-c5b5-455b-98e4-ec5961682033\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f6f6d7688-8r2qj" 
podUID="bec8aa18-c5b5-455b-98e4-ec5961682033" Aug 12 23:58:37.924911 env[1323]: time="2025-08-12T23:58:37.924852463Z" level=error msg="StopPodSandbox for \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\" failed" error="failed to destroy network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.925322 kubelet[2127]: E0812 23:58:37.925157 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:58:37.925322 kubelet[2127]: E0812 23:58:37.925225 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316"} Aug 12 23:58:37.925322 kubelet[2127]: E0812 23:58:37.925258 2127 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe2dc283-a072-4194-b3a7-efdf3c371b0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.925322 kubelet[2127]: E0812 23:58:37.925294 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"fe2dc283-a072-4194-b3a7-efdf3c371b0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54ddd56b5-bk8rz" podUID="fe2dc283-a072-4194-b3a7-efdf3c371b0b" Aug 12 23:58:37.960909 env[1323]: time="2025-08-12T23:58:37.960855411Z" level=error msg="StopPodSandbox for \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\" failed" error="failed to destroy network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:58:37.961444 kubelet[2127]: E0812 23:58:37.961286 2127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:58:37.961444 kubelet[2127]: E0812 23:58:37.961345 2127 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90"} Aug 12 23:58:37.961444 kubelet[2127]: E0812 23:58:37.961378 2127 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9c82a29-832e-4f27-bc43-e1ba46fc34e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:58:37.961444 kubelet[2127]: E0812 23:58:37.961407 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9c82a29-832e-4f27-bc43-e1ba46fc34e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645c974fb8-zw4bc" podUID="d9c82a29-832e-4f27-bc43-e1ba46fc34e5" Aug 12 23:58:41.186611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160992809.mount: Deactivated successfully. 
Aug 12 23:58:41.551424 env[1323]: time="2025-08-12T23:58:41.551380370Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:41.555193 env[1323]: time="2025-08-12T23:58:41.555154847Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:41.557812 env[1323]: time="2025-08-12T23:58:41.557764762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:41.559445 env[1323]: time="2025-08-12T23:58:41.559411455Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:41.559978 env[1323]: time="2025-08-12T23:58:41.559899787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 12 23:58:41.577530 env[1323]: time="2025-08-12T23:58:41.577482357Z" level=info msg="CreateContainer within sandbox \"438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 12 23:58:41.601291 env[1323]: time="2025-08-12T23:58:41.601230297Z" level=info msg="CreateContainer within sandbox \"438941b7b197342a6eb48acb0cd190d5dc121dc4f75f6051b6fffdae61c67d0a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cbcb7c58eb7f78ed8bea814a4f05e90ff612bac1a2b27245de061a2d3ceed4ef\"" Aug 12 23:58:41.602293 env[1323]: time="2025-08-12T23:58:41.602241323Z" level=info msg="StartContainer for 
\"cbcb7c58eb7f78ed8bea814a4f05e90ff612bac1a2b27245de061a2d3ceed4ef\"" Aug 12 23:58:41.696008 env[1323]: time="2025-08-12T23:58:41.695938305Z" level=info msg="StartContainer for \"cbcb7c58eb7f78ed8bea814a4f05e90ff612bac1a2b27245de061a2d3ceed4ef\" returns successfully" Aug 12 23:58:41.831801 kubelet[2127]: I0812 23:58:41.831674 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mlxzr" podStartSLOduration=1.633709642 podStartE2EDuration="13.831616066s" podCreationTimestamp="2025-08-12 23:58:28 +0000 UTC" firstStartedPulling="2025-08-12 23:58:29.363122402 +0000 UTC m=+18.760627153" lastFinishedPulling="2025-08-12 23:58:41.561028826 +0000 UTC m=+30.958533577" observedRunningTime="2025-08-12 23:58:41.830433382 +0000 UTC m=+31.227938133" watchObservedRunningTime="2025-08-12 23:58:41.831616066 +0000 UTC m=+31.229120817" Aug 12 23:58:41.928898 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 12 23:58:41.929228 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 12 23:58:42.055772 env[1323]: time="2025-08-12T23:58:42.055713903Z" level=info msg="StopPodSandbox for \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\"" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.198 [INFO][3415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.198 [INFO][3415] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" iface="eth0" netns="/var/run/netns/cni-ce76a8cb-b7fc-ada6-1924-0d4b94592d47" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.199 [INFO][3415] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" iface="eth0" netns="/var/run/netns/cni-ce76a8cb-b7fc-ada6-1924-0d4b94592d47" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.201 [INFO][3415] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" iface="eth0" netns="/var/run/netns/cni-ce76a8cb-b7fc-ada6-1924-0d4b94592d47" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.202 [INFO][3415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.202 [INFO][3415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.346 [INFO][3426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.346 [INFO][3426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.346 [INFO][3426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.364 [WARNING][3426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.364 [INFO][3426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.368 [INFO][3426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:42.372562 env[1323]: 2025-08-12 23:58:42.370 [INFO][3415] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:58:42.376211 env[1323]: time="2025-08-12T23:58:42.375786064Z" level=info msg="TearDown network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\" successfully" Aug 12 23:58:42.376211 env[1323]: time="2025-08-12T23:58:42.375822587Z" level=info msg="StopPodSandbox for \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\" returns successfully" Aug 12 23:58:42.374953 systemd[1]: run-netns-cni\x2dce76a8cb\x2db7fc\x2dada6\x2d1924\x2d0d4b94592d47.mount: Deactivated successfully. 
Aug 12 23:58:42.447157 kubelet[2127]: I0812 23:58:42.446921 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-backend-key-pair\") pod \"bec8aa18-c5b5-455b-98e4-ec5961682033\" (UID: \"bec8aa18-c5b5-455b-98e4-ec5961682033\") " Aug 12 23:58:42.447157 kubelet[2127]: I0812 23:58:42.446999 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-ca-bundle\") pod \"bec8aa18-c5b5-455b-98e4-ec5961682033\" (UID: \"bec8aa18-c5b5-455b-98e4-ec5961682033\") " Aug 12 23:58:42.447157 kubelet[2127]: I0812 23:58:42.447033 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzrzv\" (UniqueName: \"kubernetes.io/projected/bec8aa18-c5b5-455b-98e4-ec5961682033-kube-api-access-kzrzv\") pod \"bec8aa18-c5b5-455b-98e4-ec5961682033\" (UID: \"bec8aa18-c5b5-455b-98e4-ec5961682033\") " Aug 12 23:58:42.456111 kubelet[2127]: I0812 23:58:42.456039 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bec8aa18-c5b5-455b-98e4-ec5961682033" (UID: "bec8aa18-c5b5-455b-98e4-ec5961682033"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 12 23:58:42.459712 systemd[1]: var-lib-kubelet-pods-bec8aa18\x2dc5b5\x2d455b\x2d98e4\x2dec5961682033-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 12 23:58:42.461699 kubelet[2127]: I0812 23:58:42.461135 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bec8aa18-c5b5-455b-98e4-ec5961682033" (UID: "bec8aa18-c5b5-455b-98e4-ec5961682033"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 12 23:58:42.462896 kubelet[2127]: I0812 23:58:42.462418 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec8aa18-c5b5-455b-98e4-ec5961682033-kube-api-access-kzrzv" (OuterVolumeSpecName: "kube-api-access-kzrzv") pod "bec8aa18-c5b5-455b-98e4-ec5961682033" (UID: "bec8aa18-c5b5-455b-98e4-ec5961682033"). InnerVolumeSpecName "kube-api-access-kzrzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 12 23:58:42.462821 systemd[1]: var-lib-kubelet-pods-bec8aa18\x2dc5b5\x2d455b\x2d98e4\x2dec5961682033-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkzrzv.mount: Deactivated successfully. 
Aug 12 23:58:42.547316 kubelet[2127]: I0812 23:58:42.547269 2127 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzrzv\" (UniqueName: \"kubernetes.io/projected/bec8aa18-c5b5-455b-98e4-ec5961682033-kube-api-access-kzrzv\") on node \"localhost\" DevicePath \"\"" Aug 12 23:58:42.547316 kubelet[2127]: I0812 23:58:42.547307 2127 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 12 23:58:42.547316 kubelet[2127]: I0812 23:58:42.547319 2127 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bec8aa18-c5b5-455b-98e4-ec5961682033-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 12 23:58:42.816506 kubelet[2127]: I0812 23:58:42.816476 2127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:58:42.950780 kubelet[2127]: I0812 23:58:42.950738 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l687w\" (UniqueName: \"kubernetes.io/projected/0eb5f687-478f-4b2a-8572-b54ad3e27446-kube-api-access-l687w\") pod \"whisker-6f9d68d456-9stx5\" (UID: \"0eb5f687-478f-4b2a-8572-b54ad3e27446\") " pod="calico-system/whisker-6f9d68d456-9stx5" Aug 12 23:58:42.951195 kubelet[2127]: I0812 23:58:42.951177 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0eb5f687-478f-4b2a-8572-b54ad3e27446-whisker-backend-key-pair\") pod \"whisker-6f9d68d456-9stx5\" (UID: \"0eb5f687-478f-4b2a-8572-b54ad3e27446\") " pod="calico-system/whisker-6f9d68d456-9stx5" Aug 12 23:58:42.951294 kubelet[2127]: I0812 23:58:42.951264 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eb5f687-478f-4b2a-8572-b54ad3e27446-whisker-ca-bundle\") pod \"whisker-6f9d68d456-9stx5\" (UID: \"0eb5f687-478f-4b2a-8572-b54ad3e27446\") " pod="calico-system/whisker-6f9d68d456-9stx5" Aug 12 23:58:43.177213 env[1323]: time="2025-08-12T23:58:43.177090441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9d68d456-9stx5,Uid:0eb5f687-478f-4b2a-8572-b54ad3e27446,Namespace:calico-system,Attempt:0,}" Aug 12 23:58:43.337339 systemd-networkd[1099]: cali9967b97a9c1: Link UP Aug 12 23:58:43.338782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 12 23:58:43.338886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9967b97a9c1: link becomes ready Aug 12 23:58:43.339607 systemd-networkd[1099]: cali9967b97a9c1: Gained carrier Aug 12 23:58:43.355000 audit[3505]: AVC avc: denied { write } for pid=3505 comm="tee" name="fd" dev="proc" ino=20647 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.355000 audit[3505]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffda6527de a2=241 a3=1b6 items=1 ppid=3485 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.361306 kernel: audit: type=1400 audit(1755043123.355:297): avc: denied { write } for pid=3505 comm="tee" name="fd" dev="proc" ino=20647 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.361422 kernel: audit: type=1300 audit(1755043123.355:297): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffda6527de a2=241 a3=1b6 items=1 ppid=3485 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 12 23:58:43.361447 kernel: audit: type=1307 audit(1755043123.355:297): cwd="/etc/service/enabled/node-status-reporter/log" Aug 12 23:58:43.355000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 12 23:58:43.355000 audit: PATH item=0 name="/dev/fd/63" inode=18931 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.364671 kernel: audit: type=1302 audit(1755043123.355:297): item=0 name="/dev/fd/63" inode=18931 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.364750 kernel: audit: type=1327 audit(1755043123.355:297): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.221 [INFO][3450] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.236 [INFO][3450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f9d68d456--9stx5-eth0 whisker-6f9d68d456- calico-system 0eb5f687-478f-4b2a-8572-b54ad3e27446 922 0 2025-08-12 23:58:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f9d68d456 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f9d68d456-9stx5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9967b97a9c1 [] [] }} 
ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.237 [INFO][3450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.273 [INFO][3464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" HandleID="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Workload="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.274 [INFO][3464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" HandleID="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Workload="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f9d68d456-9stx5", "timestamp":"2025-08-12 23:58:43.273929372 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.274 [INFO][3464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.274 [INFO][3464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.274 [INFO][3464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.288 [INFO][3464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.300 [INFO][3464] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.305 [INFO][3464] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.307 [INFO][3464] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.310 [INFO][3464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.310 [INFO][3464] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.312 [INFO][3464] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044 Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.317 [INFO][3464] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.322 [INFO][3464] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" host="localhost" Aug 12 
23:58:43.372421 env[1323]: 2025-08-12 23:58:43.323 [INFO][3464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" host="localhost" Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.323 [INFO][3464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:43.372421 env[1323]: 2025-08-12 23:58:43.323 [INFO][3464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" HandleID="k8s-pod-network.c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Workload="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" Aug 12 23:58:43.373121 env[1323]: 2025-08-12 23:58:43.325 [INFO][3450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f9d68d456--9stx5-eth0", GenerateName:"whisker-6f9d68d456-", Namespace:"calico-system", SelfLink:"", UID:"0eb5f687-478f-4b2a-8572-b54ad3e27446", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9d68d456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f9d68d456-9stx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9967b97a9c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:43.373121 env[1323]: 2025-08-12 23:58:43.325 [INFO][3450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" Aug 12 23:58:43.373121 env[1323]: 2025-08-12 23:58:43.325 [INFO][3450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9967b97a9c1 ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" Aug 12 23:58:43.373121 env[1323]: 2025-08-12 23:58:43.339 [INFO][3450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" Aug 12 23:58:43.373121 env[1323]: 2025-08-12 23:58:43.341 [INFO][3450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f9d68d456--9stx5-eth0", GenerateName:"whisker-6f9d68d456-", Namespace:"calico-system", SelfLink:"", UID:"0eb5f687-478f-4b2a-8572-b54ad3e27446", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9d68d456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044", Pod:"whisker-6f9d68d456-9stx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9967b97a9c1", MAC:"5e:73:00:6f:16:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:43.373121 env[1323]: 2025-08-12 23:58:43.353 [INFO][3450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044" Namespace="calico-system" Pod="whisker-6f9d68d456-9stx5" WorkloadEndpoint="localhost-k8s-whisker--6f9d68d456--9stx5-eth0" Aug 12 23:58:43.378000 audit[3507]: AVC avc: denied { write } for pid=3507 comm="tee" name="fd" dev="proc" ino=19818 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.378000 audit[3507]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 
a0=ffffffffffffff9c a1=ffffd19b97ed a2=241 a3=1b6 items=1 ppid=3493 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.390448 kernel: audit: type=1400 audit(1755043123.378:298): avc: denied { write } for pid=3507 comm="tee" name="fd" dev="proc" ino=19818 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.390555 kernel: audit: type=1300 audit(1755043123.378:298): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd19b97ed a2=241 a3=1b6 items=1 ppid=3493 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.378000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 12 23:58:43.391697 kernel: audit: type=1307 audit(1755043123.378:298): cwd="/etc/service/enabled/confd/log" Aug 12 23:58:43.378000 audit: PATH item=0 name="/dev/fd/63" inode=18016 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.393823 kernel: audit: type=1302 audit(1755043123.378:298): item=0 name="/dev/fd/63" inode=18016 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.398367 kernel: audit: type=1327 audit(1755043123.378:298): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.384000 
audit[3544]: AVC avc: denied { write } for pid=3544 comm="tee" name="fd" dev="proc" ino=18036 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.384000 audit[3544]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe51907dd a2=241 a3=1b6 items=1 ppid=3498 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.384000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 12 23:58:43.384000 audit: PATH item=0 name="/dev/fd/63" inode=19822 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.384000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.385000 audit[3542]: AVC avc: denied { write } for pid=3542 comm="tee" name="fd" dev="proc" ino=19826 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.385000 audit[3542]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc8f457ef a2=241 a3=1b6 items=1 ppid=3482 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.385000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 12 23:58:43.385000 audit: PATH item=0 name="/dev/fd/63" inode=18031 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.385000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.387000 audit[3549]: AVC avc: denied { write } for pid=3549 comm="tee" name="fd" dev="proc" ino=20659 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.387000 audit[3549]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffe6fa7ed a2=241 a3=1b6 items=1 ppid=3491 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.387000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 12 23:58:43.387000 audit: PATH item=0 name="/dev/fd/63" inode=19823 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.387000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.404422 env[1323]: time="2025-08-12T23:58:43.404297404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:43.404650 env[1323]: time="2025-08-12T23:58:43.404606074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:43.404766 env[1323]: time="2025-08-12T23:58:43.404726926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:43.405085 env[1323]: time="2025-08-12T23:58:43.405037797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044 pid=3561 runtime=io.containerd.runc.v2 Aug 12 23:58:43.422000 audit[3585]: AVC avc: denied { write } for pid=3585 comm="tee" name="fd" dev="proc" ino=19857 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.422000 audit[3585]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffebc007ee a2=241 a3=1b6 items=1 ppid=3483 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.422000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 12 23:58:43.422000 audit: PATH item=0 name="/dev/fd/63" inode=18956 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.469000 audit[3599]: AVC avc: denied { write } for pid=3599 comm="tee" name="fd" dev="proc" ino=18066 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 12 23:58:43.469000 audit[3599]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffff9027ed a2=241 a3=1b6 items=1 ppid=3490 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:43.469000 audit: CWD 
cwd="/etc/service/enabled/felix/log" Aug 12 23:58:43.469000 audit: PATH item=0 name="/dev/fd/63" inode=19859 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:58:43.469000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 12 23:58:43.497434 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:43.519507 env[1323]: time="2025-08-12T23:58:43.519465219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9d68d456-9stx5,Uid:0eb5f687-478f-4b2a-8572-b54ad3e27446,Namespace:calico-system,Attempt:0,} returns sandbox id \"c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044\"" Aug 12 23:58:43.521529 env[1323]: time="2025-08-12T23:58:43.521480818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 12 23:58:44.575533 env[1323]: time="2025-08-12T23:58:44.575479582Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:44.577564 env[1323]: time="2025-08-12T23:58:44.577522976Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:44.580434 env[1323]: time="2025-08-12T23:58:44.580377209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:44.590537 env[1323]: time="2025-08-12T23:58:44.590490093Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:44.591051 env[1323]: time="2025-08-12T23:58:44.591017223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 12 23:58:44.594307 env[1323]: time="2025-08-12T23:58:44.593729361Z" level=info msg="CreateContainer within sandbox \"c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 12 23:58:44.700398 env[1323]: time="2025-08-12T23:58:44.700336324Z" level=info msg="CreateContainer within sandbox \"c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1886e8a71c8633d1ccf988654f34c9d9c83eb4612d7a6bad0e9071c106b32cc0\"" Aug 12 23:58:44.700936 env[1323]: time="2025-08-12T23:58:44.700901658Z" level=info msg="StartContainer for \"1886e8a71c8633d1ccf988654f34c9d9c83eb4612d7a6bad0e9071c106b32cc0\"" Aug 12 23:58:44.712923 kubelet[2127]: I0812 23:58:44.712885 2127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec8aa18-c5b5-455b-98e4-ec5961682033" path="/var/lib/kubelet/pods/bec8aa18-c5b5-455b-98e4-ec5961682033/volumes" Aug 12 23:58:44.796290 env[1323]: time="2025-08-12T23:58:44.796238706Z" level=info msg="StartContainer for \"1886e8a71c8633d1ccf988654f34c9d9c83eb4612d7a6bad0e9071c106b32cc0\" returns successfully" Aug 12 23:58:44.798665 env[1323]: time="2025-08-12T23:58:44.798465999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 12 23:58:45.194762 systemd-networkd[1099]: cali9967b97a9c1: Gained IPv6LL Aug 12 23:58:46.528225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732185010.mount: Deactivated successfully. 
Aug 12 23:58:46.552679 env[1323]: time="2025-08-12T23:58:46.551890104Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:46.553766 env[1323]: time="2025-08-12T23:58:46.553704187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:46.555851 env[1323]: time="2025-08-12T23:58:46.555817777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:46.558142 env[1323]: time="2025-08-12T23:58:46.558105422Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:46.564145 env[1323]: time="2025-08-12T23:58:46.564085798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 12 23:58:46.573787 env[1323]: time="2025-08-12T23:58:46.572766977Z" level=info msg="CreateContainer within sandbox \"c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 12 23:58:46.664812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029414052.mount: Deactivated successfully. 
Aug 12 23:58:46.676388 env[1323]: time="2025-08-12T23:58:46.676321226Z" level=info msg="CreateContainer within sandbox \"c81ce0cd35e695661423b8ddffcf90922e77593aa981ea716b7fabe926ed9044\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3ca47eb97ccde4fb7d14fd0b02b6553232bc0176d873eaf51159eb867f3d0554\"" Aug 12 23:58:46.677263 env[1323]: time="2025-08-12T23:58:46.677228108Z" level=info msg="StartContainer for \"3ca47eb97ccde4fb7d14fd0b02b6553232bc0176d873eaf51159eb867f3d0554\"" Aug 12 23:58:46.864358 env[1323]: time="2025-08-12T23:58:46.864254444Z" level=info msg="StartContainer for \"3ca47eb97ccde4fb7d14fd0b02b6553232bc0176d873eaf51159eb867f3d0554\" returns successfully" Aug 12 23:58:47.862155 kubelet[2127]: I0812 23:58:47.862094 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f9d68d456-9stx5" podStartSLOduration=2.8172529710000003 podStartE2EDuration="5.862076554s" podCreationTimestamp="2025-08-12 23:58:42 +0000 UTC" firstStartedPulling="2025-08-12 23:58:43.521090099 +0000 UTC m=+32.918594850" lastFinishedPulling="2025-08-12 23:58:46.565913682 +0000 UTC m=+35.963418433" observedRunningTime="2025-08-12 23:58:47.86123664 +0000 UTC m=+37.258741351" watchObservedRunningTime="2025-08-12 23:58:47.862076554 +0000 UTC m=+37.259581305" Aug 12 23:58:48.029000 audit[3795]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:48.029000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc4c868d0 a2=0 a3=1 items=0 ppid=2235 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:48.029000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:48.039000 audit[3795]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:48.039000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc4c868d0 a2=0 a3=1 items=0 ppid=2235 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:48.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:49.706372 env[1323]: time="2025-08-12T23:58:49.706316459Z" level=info msg="StopPodSandbox for \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\"" Aug 12 23:58:49.706768 env[1323]: time="2025-08-12T23:58:49.706715132Z" level=info msg="StopPodSandbox for \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\"" Aug 12 23:58:49.706948 env[1323]: time="2025-08-12T23:58:49.706898267Z" level=info msg="StopPodSandbox for \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\"" Aug 12 23:58:49.707100 env[1323]: time="2025-08-12T23:58:49.707072322Z" level=info msg="StopPodSandbox for \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\"" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.857 [INFO][3867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.858 [INFO][3867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" iface="eth0" netns="/var/run/netns/cni-5ea4be0e-cf1c-8a7d-c0fe-e82767c32bac" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.858 [INFO][3867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" iface="eth0" netns="/var/run/netns/cni-5ea4be0e-cf1c-8a7d-c0fe-e82767c32bac" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.858 [INFO][3867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" iface="eth0" netns="/var/run/netns/cni-5ea4be0e-cf1c-8a7d-c0fe-e82767c32bac" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.858 [INFO][3867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.858 [INFO][3867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.885 [INFO][3897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.885 [INFO][3897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.885 [INFO][3897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.943 [WARNING][3897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:49.943 [INFO][3897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:50.017 [INFO][3897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.023197 env[1323]: 2025-08-12 23:58:50.021 [INFO][3867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:58:50.025665 systemd[1]: run-netns-cni\x2d5ea4be0e\x2dcf1c\x2d8a7d\x2dc0fe\x2de82767c32bac.mount: Deactivated successfully. Aug 12 23:58:50.027847 env[1323]: time="2025-08-12T23:58:50.027799329Z" level=info msg="TearDown network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\" successfully" Aug 12 23:58:50.028000 env[1323]: time="2025-08-12T23:58:50.027980944Z" level=info msg="StopPodSandbox for \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\" returns successfully" Aug 12 23:58:50.028982 env[1323]: time="2025-08-12T23:58:50.028936501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzhss,Uid:3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc,Namespace:calico-system,Attempt:1,}" Aug 12 23:58:50.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.49:22-10.0.0.1:56178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:58:50.110362 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:56178.service. Aug 12 23:58:50.111131 kernel: kauditd_printk_skb: 31 callbacks suppressed Aug 12 23:58:50.111182 kernel: audit: type=1130 audit(1755043130.109:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.49:22-10.0.0.1:56178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.941 [INFO][3855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.941 [INFO][3855] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" iface="eth0" netns="/var/run/netns/cni-3a691877-6207-0031-0e00-0e47f0b5ca1f" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.941 [INFO][3855] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" iface="eth0" netns="/var/run/netns/cni-3a691877-6207-0031-0e00-0e47f0b5ca1f" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.942 [INFO][3855] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" iface="eth0" netns="/var/run/netns/cni-3a691877-6207-0031-0e00-0e47f0b5ca1f" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.942 [INFO][3855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.942 [INFO][3855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.976 [INFO][3913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:49.976 [INFO][3913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:50.018 [INFO][3913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:50.087 [WARNING][3913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:50.087 [INFO][3913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:50.096 [INFO][3913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.128477 env[1323]: 2025-08-12 23:58:50.110 [INFO][3855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:58:50.131319 systemd[1]: run-netns-cni\x2d3a691877\x2d6207\x2d0031\x2d0e00\x2d0e47f0b5ca1f.mount: Deactivated successfully. 
Aug 12 23:58:50.134657 env[1323]: time="2025-08-12T23:58:50.132363457Z" level=info msg="TearDown network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\" successfully" Aug 12 23:58:50.134657 env[1323]: time="2025-08-12T23:58:50.132408941Z" level=info msg="StopPodSandbox for \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\" returns successfully" Aug 12 23:58:50.134657 env[1323]: time="2025-08-12T23:58:50.134079035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645c974fb8-zw4bc,Uid:d9c82a29-832e-4f27-bc43-e1ba46fc34e5,Namespace:calico-system,Attempt:1,}" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.941 [INFO][3883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.942 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" iface="eth0" netns="/var/run/netns/cni-a8022eb3-029a-c025-2e65-4a0dc56bf45d" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.942 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" iface="eth0" netns="/var/run/netns/cni-a8022eb3-029a-c025-2e65-4a0dc56bf45d" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.942 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" iface="eth0" netns="/var/run/netns/cni-a8022eb3-029a-c025-2e65-4a0dc56bf45d" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.942 [INFO][3883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.942 [INFO][3883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.980 [INFO][3915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:49.980 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:50.101 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:50.136 [WARNING][3915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:50.136 [INFO][3915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:50.139 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.156801 env[1323]: 2025-08-12 23:58:50.151 [INFO][3883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:58:50.157632 systemd[1]: run-netns-cni\x2da8022eb3\x2d029a\x2dc025\x2d2e65\x2d4a0dc56bf45d.mount: Deactivated successfully. 
Aug 12 23:58:50.158601 env[1323]: time="2025-08-12T23:58:50.158545323Z" level=info msg="TearDown network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\" successfully" Aug 12 23:58:50.158601 env[1323]: time="2025-08-12T23:58:50.158589326Z" level=info msg="StopPodSandbox for \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\" returns successfully" Aug 12 23:58:50.159621 env[1323]: time="2025-08-12T23:58:50.159548243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-bk8rz,Uid:fe2dc283-a072-4194-b3a7-efdf3c371b0b,Namespace:calico-apiserver,Attempt:1,}" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.019 [INFO][3877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.019 [INFO][3877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" iface="eth0" netns="/var/run/netns/cni-24022c05-1f77-576e-98a9-0ca62a22888a" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.020 [INFO][3877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" iface="eth0" netns="/var/run/netns/cni-24022c05-1f77-576e-98a9-0ca62a22888a" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.020 [INFO][3877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" iface="eth0" netns="/var/run/netns/cni-24022c05-1f77-576e-98a9-0ca62a22888a" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.020 [INFO][3877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.020 [INFO][3877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.131 [INFO][3930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.131 [INFO][3930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.139 [INFO][3930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.154 [WARNING][3930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.154 [INFO][3930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.156 [INFO][3930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.162286 env[1323]: 2025-08-12 23:58:50.159 [INFO][3877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:58:50.168524 env[1323]: time="2025-08-12T23:58:50.164755902Z" level=info msg="TearDown network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\" successfully" Aug 12 23:58:50.168524 env[1323]: time="2025-08-12T23:58:50.164809626Z" level=info msg="StopPodSandbox for \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\" returns successfully" Aug 12 23:58:50.165693 systemd[1]: run-netns-cni\x2d24022c05\x2d1f77\x2d576e\x2d98a9\x2d0ca62a22888a.mount: Deactivated successfully. 
Aug 12 23:58:50.168772 kubelet[2127]: E0812 23:58:50.165112 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:50.170855 env[1323]: time="2025-08-12T23:58:50.170819950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-66lv2,Uid:e2d94074-07d3-4e8f-bed7-18c1079c94eb,Namespace:kube-system,Attempt:1,}" Aug 12 23:58:50.185457 sshd[3942]: Accepted publickey for core from 10.0.0.1 port 56178 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:58:50.184000 audit[3942]: USER_ACCT pid=3942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.188670 kernel: audit: type=1101 audit(1755043130.184:307): pid=3942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.188000 audit[3942]: CRED_ACQ pid=3942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.189863 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:58:50.193510 kernel: audit: type=1103 audit(1755043130.188:308): pid=3942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.193604 kernel: audit: type=1006 audit(1755043130.188:309): 
pid=3942 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Aug 12 23:58:50.188000 audit[3942]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe8e2c30 a2=3 a3=1 items=0 ppid=1 pid=3942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:50.196232 kernel: audit: type=1300 audit(1755043130.188:309): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe8e2c30 a2=3 a3=1 items=0 ppid=1 pid=3942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:50.196374 kernel: audit: type=1327 audit(1755043130.188:309): proctitle=737368643A20636F7265205B707269765D Aug 12 23:58:50.188000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:58:50.203538 systemd-logind[1309]: New session 8 of user core. Aug 12 23:58:50.203622 systemd[1]: Started session-8.scope. 
Aug 12 23:58:50.212000 audit[3942]: USER_START pid=3942 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.216000 audit[3966]: CRED_ACQ pid=3966 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.220472 kernel: audit: type=1105 audit(1755043130.212:310): pid=3942 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.220559 kernel: audit: type=1103 audit(1755043130.216:311): pid=3966 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.409174 systemd-networkd[1099]: cali31fced89afb: Link UP Aug 12 23:58:50.411573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 12 23:58:50.411687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali31fced89afb: link becomes ready Aug 12 23:58:50.412276 systemd-networkd[1099]: cali31fced89afb: Gained carrier Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.232 [INFO][3953] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.248 [INFO][3953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wzhss-eth0 csi-node-driver- calico-system 3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc 
964 0 2025-08-12 23:58:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wzhss eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali31fced89afb [] [] }} ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.248 [INFO][3953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.305 [INFO][3976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" HandleID="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.306 [INFO][3976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" HandleID="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000584ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wzhss", "timestamp":"2025-08-12 23:58:50.305931014 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.306 [INFO][3976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.308 [INFO][3976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.308 [INFO][3976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.323 [INFO][3976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.342 [INFO][3976] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.352 [INFO][3976] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.361 [INFO][3976] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.373 [INFO][3976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.373 [INFO][3976] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.375 [INFO][3976] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.380 [INFO][3976] ipam/ipam.go 1243: Writing block in 
order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.393 [INFO][3976] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.394 [INFO][3976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" host="localhost" Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.394 [INFO][3976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.436854 env[1323]: 2025-08-12 23:58:50.394 [INFO][3976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" HandleID="k8s-pod-network.85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.437761 env[1323]: 2025-08-12 23:58:50.400 [INFO][3953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzhss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wzhss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31fced89afb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.437761 env[1323]: 2025-08-12 23:58:50.401 [INFO][3953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.437761 env[1323]: 2025-08-12 23:58:50.401 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31fced89afb ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.437761 env[1323]: 2025-08-12 23:58:50.412 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.437761 env[1323]: 2025-08-12 23:58:50.413 [INFO][3953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzhss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a", Pod:"csi-node-driver-wzhss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31fced89afb", MAC:"9e:e5:71:be:94:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 
23:58:50.437761 env[1323]: 2025-08-12 23:58:50.430 [INFO][3953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a" Namespace="calico-system" Pod="csi-node-driver-wzhss" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:58:50.469204 env[1323]: time="2025-08-12T23:58:50.468051931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:50.469204 env[1323]: time="2025-08-12T23:58:50.468097294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:50.469204 env[1323]: time="2025-08-12T23:58:50.468107655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:50.469204 env[1323]: time="2025-08-12T23:58:50.468277149Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a pid=4075 runtime=io.containerd.runc.v2 Aug 12 23:58:50.518097 systemd-networkd[1099]: cali9b6998d238a: Link UP Aug 12 23:58:50.519128 systemd-networkd[1099]: cali9b6998d238a: Gained carrier Aug 12 23:58:50.519660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9b6998d238a: link becomes ready Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.309 [INFO][3983] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.338 [INFO][3983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0 calico-apiserver-54ddd56b5- calico-apiserver fe2dc283-a072-4194-b3a7-efdf3c371b0b 966 0 2025-08-12 23:58:25 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54ddd56b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54ddd56b5-bk8rz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b6998d238a [] [] }} ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.338 [INFO][3983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.418 [INFO][4029] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" HandleID="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.418 [INFO][4029] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" HandleID="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042c2f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54ddd56b5-bk8rz", "timestamp":"2025-08-12 23:58:50.418064831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.418 [INFO][4029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.418 [INFO][4029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.418 [INFO][4029] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.434 [INFO][4029] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.450 [INFO][4029] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.456 [INFO][4029] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.459 [INFO][4029] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.468 [INFO][4029] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.469 [INFO][4029] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.471 [INFO][4029] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3 Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.475 [INFO][4029] ipam/ipam.go 1243: Writing block in 
order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.491 [INFO][4029] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.491 [INFO][4029] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" host="localhost" Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.491 [INFO][4029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.535288 env[1323]: 2025-08-12 23:58:50.491 [INFO][4029] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" HandleID="k8s-pod-network.fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.536023 env[1323]: 2025-08-12 23:58:50.515 [INFO][3983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2dc283-a072-4194-b3a7-efdf3c371b0b", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54ddd56b5-bk8rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b6998d238a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.536023 env[1323]: 2025-08-12 23:58:50.516 [INFO][3983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.536023 env[1323]: 2025-08-12 23:58:50.516 [INFO][3983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b6998d238a ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.536023 env[1323]: 2025-08-12 23:58:50.518 [INFO][3983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" 
Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.536023 env[1323]: 2025-08-12 23:58:50.519 [INFO][3983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2dc283-a072-4194-b3a7-efdf3c371b0b", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3", Pod:"calico-apiserver-54ddd56b5-bk8rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b6998d238a", MAC:"c6:b4:7e:84:77:a3", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.536023 env[1323]: 2025-08-12 23:58:50.529 [INFO][3983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-bk8rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:58:50.541832 sshd[3942]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:50.541000 audit[3942]: USER_END pid=3942 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.541000 audit[3942]: CRED_DISP pid=3942 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.546119 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:56178.service: Deactivated successfully. Aug 12 23:58:50.549184 kernel: audit: type=1106 audit(1755043130.541:312): pid=3942 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.549272 kernel: audit: type=1104 audit(1755043130.541:313): pid=3942 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:50.549453 systemd-logind[1309]: Session 8 logged out. Waiting for processes to exit. 
Aug 12 23:58:50.549470 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:58:50.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.49:22-10.0.0.1:56178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:50.550895 systemd-logind[1309]: Removed session 8. Aug 12 23:58:50.558694 env[1323]: time="2025-08-12T23:58:50.558058528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:50.558694 env[1323]: time="2025-08-12T23:58:50.558107132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:50.558694 env[1323]: time="2025-08-12T23:58:50.558117213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:50.558694 env[1323]: time="2025-08-12T23:58:50.558289387Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3 pid=4121 runtime=io.containerd.runc.v2 Aug 12 23:58:50.560834 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:50.586003 env[1323]: time="2025-08-12T23:58:50.585920329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzhss,Uid:3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a\"" Aug 12 23:58:50.588785 env[1323]: time="2025-08-12T23:58:50.588740355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 12 23:58:50.597831 systemd-networkd[1099]: cali1429fdf9466: Link UP Aug 12 23:58:50.599055 
systemd-networkd[1099]: cali1429fdf9466: Gained carrier Aug 12 23:58:50.599735 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1429fdf9466: link becomes ready Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.323 [INFO][3968] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.343 [INFO][3968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0 calico-kube-controllers-645c974fb8- calico-system d9c82a29-832e-4f27-bc43-e1ba46fc34e5 965 0 2025-08-12 23:58:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:645c974fb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-645c974fb8-zw4bc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1429fdf9466 [] [] }} ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.343 [INFO][3968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.442 [INFO][4036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" HandleID="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" 
Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.442 [INFO][4036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" HandleID="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005184e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-645c974fb8-zw4bc", "timestamp":"2025-08-12 23:58:50.442611925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.442 [INFO][4036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.500 [INFO][4036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.500 [INFO][4036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.538 [INFO][4036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.553 [INFO][4036] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.562 [INFO][4036] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.564 [INFO][4036] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.567 [INFO][4036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.567 [INFO][4036] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.569 [INFO][4036] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1 Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.575 [INFO][4036] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.582 [INFO][4036] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" host="localhost" Aug 12 
23:58:50.614040 env[1323]: 2025-08-12 23:58:50.582 [INFO][4036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" host="localhost" Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.582 [INFO][4036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.614040 env[1323]: 2025-08-12 23:58:50.582 [INFO][4036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" HandleID="k8s-pod-network.ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.614421 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:50.614808 env[1323]: 2025-08-12 23:58:50.594 [INFO][3968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0", GenerateName:"calico-kube-controllers-645c974fb8-", Namespace:"calico-system", SelfLink:"", UID:"d9c82a29-832e-4f27-bc43-e1ba46fc34e5", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645c974fb8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-645c974fb8-zw4bc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1429fdf9466", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.614808 env[1323]: 2025-08-12 23:58:50.594 [INFO][3968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.614808 env[1323]: 2025-08-12 23:58:50.594 [INFO][3968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1429fdf9466 ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.614808 env[1323]: 2025-08-12 23:58:50.599 [INFO][3968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.614808 env[1323]: 2025-08-12 23:58:50.599 [INFO][3968] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0", GenerateName:"calico-kube-controllers-645c974fb8-", Namespace:"calico-system", SelfLink:"", UID:"d9c82a29-832e-4f27-bc43-e1ba46fc34e5", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645c974fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1", Pod:"calico-kube-controllers-645c974fb8-zw4bc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1429fdf9466", MAC:"7a:3f:c7:7f:6e:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.614808 env[1323]: 2025-08-12 23:58:50.612 [INFO][3968] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1" Namespace="calico-system" Pod="calico-kube-controllers-645c974fb8-zw4bc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:58:50.625606 env[1323]: time="2025-08-12T23:58:50.625115800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:50.625606 env[1323]: time="2025-08-12T23:58:50.625159924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:50.625606 env[1323]: time="2025-08-12T23:58:50.625172125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:50.625606 env[1323]: time="2025-08-12T23:58:50.625441707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1 pid=4170 runtime=io.containerd.runc.v2 Aug 12 23:58:50.633275 env[1323]: time="2025-08-12T23:58:50.633193650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-bk8rz,Uid:fe2dc283-a072-4194-b3a7-efdf3c371b0b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3\"" Aug 12 23:58:50.662422 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:50.684634 env[1323]: time="2025-08-12T23:58:50.684580302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645c974fb8-zw4bc,Uid:d9c82a29-832e-4f27-bc43-e1ba46fc34e5,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1\"" Aug 12 23:58:50.698493 systemd-networkd[1099]: cali72abd4fdbd7: Link UP Aug 12 23:58:50.698714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali72abd4fdbd7: link becomes ready Aug 12 23:58:50.698740 systemd-networkd[1099]: cali72abd4fdbd7: Gained carrier Aug 12 23:58:50.708109 env[1323]: time="2025-08-12T23:58:50.707875815Z" level=info msg="StopPodSandbox for \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\"" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.363 [INFO][3999] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.395 [INFO][3999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0 coredns-7c65d6cfc9- kube-system e2d94074-07d3-4e8f-bed7-18c1079c94eb 972 0 2025-08-12 23:58:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-66lv2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72abd4fdbd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.395 [INFO][3999] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.467 [INFO][4053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" HandleID="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.467 [INFO][4053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" HandleID="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a45a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-66lv2", "timestamp":"2025-08-12 23:58:50.467102854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.467 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.584 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.586 [INFO][4053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.635 [INFO][4053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.653 [INFO][4053] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.660 [INFO][4053] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.662 [INFO][4053] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.669 [INFO][4053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.670 [INFO][4053] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.676 [INFO][4053] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62 Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.682 [INFO][4053] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.692 [INFO][4053] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" host="localhost" Aug 12 
23:58:50.714314 env[1323]: 2025-08-12 23:58:50.692 [INFO][4053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" host="localhost" Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.692 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.714314 env[1323]: 2025-08-12 23:58:50.692 [INFO][4053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" HandleID="k8s-pod-network.9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.714975 env[1323]: 2025-08-12 23:58:50.695 [INFO][3999] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e2d94074-07d3-4e8f-bed7-18c1079c94eb", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-66lv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72abd4fdbd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.714975 env[1323]: 2025-08-12 23:58:50.695 [INFO][3999] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.714975 env[1323]: 2025-08-12 23:58:50.695 [INFO][3999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72abd4fdbd7 ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.714975 env[1323]: 2025-08-12 23:58:50.699 [INFO][3999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.714975 env[1323]: 2025-08-12 23:58:50.700 [INFO][3999] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e2d94074-07d3-4e8f-bed7-18c1079c94eb", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62", Pod:"coredns-7c65d6cfc9-66lv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72abd4fdbd7", MAC:"e6:16:3c:dc:38:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.714975 env[1323]: 2025-08-12 23:58:50.712 [INFO][3999] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-66lv2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:58:50.726966 env[1323]: time="2025-08-12T23:58:50.726887584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:50.726966 env[1323]: time="2025-08-12T23:58:50.726930348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:50.726966 env[1323]: time="2025-08-12T23:58:50.726954309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:50.727224 env[1323]: time="2025-08-12T23:58:50.727177407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62 pid=4240 runtime=io.containerd.runc.v2 Aug 12 23:58:50.756790 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:50.779878 env[1323]: time="2025-08-12T23:58:50.779825481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-66lv2,Uid:e2d94074-07d3-4e8f-bed7-18c1079c94eb,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62\"" Aug 12 23:58:50.781867 kubelet[2127]: E0812 23:58:50.780694 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Aug 12 23:58:50.783159 env[1323]: time="2025-08-12T23:58:50.783110025Z" level=info msg="CreateContainer within sandbox \"9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:58:50.805958 env[1323]: time="2025-08-12T23:58:50.805902298Z" level=info msg="CreateContainer within sandbox \"9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"def27c19277d9a9c6f93ae4813e3fd10eba6f7e4fbe0f50b167a9e35037aef95\"" Aug 12 23:58:50.807863 env[1323]: time="2025-08-12T23:58:50.807003386Z" level=info msg="StartContainer for \"def27c19277d9a9c6f93ae4813e3fd10eba6f7e4fbe0f50b167a9e35037aef95\"" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.769 [INFO][4224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.770 [INFO][4224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" iface="eth0" netns="/var/run/netns/cni-2fc70d31-dd69-4bb1-8209-b985a9fda0ad" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.771 [INFO][4224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" iface="eth0" netns="/var/run/netns/cni-2fc70d31-dd69-4bb1-8209-b985a9fda0ad" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.771 [INFO][4224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" iface="eth0" netns="/var/run/netns/cni-2fc70d31-dd69-4bb1-8209-b985a9fda0ad" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.771 [INFO][4224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.771 [INFO][4224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.795 [INFO][4273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.795 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.795 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.805 [WARNING][4273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.805 [INFO][4273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.807 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:50.813102 env[1323]: 2025-08-12 23:58:50.811 [INFO][4224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:58:50.813533 env[1323]: time="2025-08-12T23:58:50.813247008Z" level=info msg="TearDown network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\" successfully" Aug 12 23:58:50.813533 env[1323]: time="2025-08-12T23:58:50.813280171Z" level=info msg="StopPodSandbox for \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\" returns successfully" Aug 12 23:58:50.813915 env[1323]: time="2025-08-12T23:58:50.813885900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-h599k,Uid:3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8,Namespace:calico-system,Attempt:1,}" Aug 12 23:58:50.871556 env[1323]: time="2025-08-12T23:58:50.871497332Z" level=info msg="StartContainer for \"def27c19277d9a9c6f93ae4813e3fd10eba6f7e4fbe0f50b167a9e35037aef95\" returns successfully" Aug 12 23:58:50.955038 systemd-networkd[1099]: cali580531e3e15: Link UP Aug 12 23:58:50.956517 systemd-networkd[1099]: cali580531e3e15: Gained carrier Aug 12 23:58:50.956660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali580531e3e15: link 
becomes ready Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.859 [INFO][4309] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.881 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--h599k-eth0 goldmane-58fd7646b9- calico-system 3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8 1021 0 2025-08-12 23:58:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-h599k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali580531e3e15 [] [] }} ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.881 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.905 [INFO][4336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" HandleID="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.905 [INFO][4336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" 
HandleID="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-h599k", "timestamp":"2025-08-12 23:58:50.905604875 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.905 [INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.905 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.905 [INFO][4336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.917 [INFO][4336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.923 [INFO][4336] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.928 [INFO][4336] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.931 [INFO][4336] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.933 [INFO][4336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.933 [INFO][4336] ipam/ipam.go 1220: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.935 [INFO][4336] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.940 [INFO][4336] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.948 [INFO][4336] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.948 [INFO][4336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" host="localhost" Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.948 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:58:50.972423 env[1323]: 2025-08-12 23:58:50.948 [INFO][4336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" HandleID="k8s-pod-network.3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.973276 env[1323]: 2025-08-12 23:58:50.951 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--h599k-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-h599k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali580531e3e15", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.973276 env[1323]: 2025-08-12 23:58:50.951 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.973276 env[1323]: 2025-08-12 23:58:50.951 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580531e3e15 ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.973276 env[1323]: 2025-08-12 23:58:50.956 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.973276 env[1323]: 2025-08-12 23:58:50.957 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--h599k-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 28, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f", Pod:"goldmane-58fd7646b9-h599k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali580531e3e15", MAC:"56:28:15:aa:10:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:50.973276 env[1323]: 2025-08-12 23:58:50.969 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f" Namespace="calico-system" Pod="goldmane-58fd7646b9-h599k" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:58:50.984824 env[1323]: time="2025-08-12T23:58:50.984550663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:50.985118 env[1323]: time="2025-08-12T23:58:50.984800163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:50.985287 env[1323]: time="2025-08-12T23:58:50.985198475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:50.986430 env[1323]: time="2025-08-12T23:58:50.986291523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f pid=4363 runtime=io.containerd.runc.v2 Aug 12 23:58:51.034370 systemd[1]: run-netns-cni\x2d2fc70d31\x2ddd69\x2d4bb1\x2d8209\x2db985a9fda0ad.mount: Deactivated successfully. Aug 12 23:58:51.041368 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:51.066873 env[1323]: time="2025-08-12T23:58:51.066822550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-h599k,Uid:3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8,Namespace:calico-system,Attempt:1,} returns sandbox id \"3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f\"" Aug 12 23:58:51.594752 systemd-networkd[1099]: cali31fced89afb: Gained IPv6LL Aug 12 23:58:51.637709 kubelet[2127]: I0812 23:58:51.635196 2127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:58:51.637709 kubelet[2127]: E0812 23:58:51.635609 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:51.654366 env[1323]: time="2025-08-12T23:58:51.654321994Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:51.659967 env[1323]: time="2025-08-12T23:58:51.659913272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:51.664081 env[1323]: time="2025-08-12T23:58:51.664037636Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:51.666234 env[1323]: time="2025-08-12T23:58:51.666150962Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:51.666680 env[1323]: time="2025-08-12T23:58:51.666543473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 12 23:58:51.668410 env[1323]: time="2025-08-12T23:58:51.668345574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:58:51.671989 env[1323]: time="2025-08-12T23:58:51.671924815Z" level=info msg="CreateContainer within sandbox \"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 12 23:58:51.698000 audit[4423]: NETFILTER_CFG table=filter:101 family=2 entries=19 op=nft_register_rule pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:51.698000 audit[4423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffb4e5570 a2=0 a3=1 items=0 ppid=2235 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:51.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:51.701873 env[1323]: time="2025-08-12T23:58:51.701807799Z" level=info msg="CreateContainer within sandbox \"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"197f411f47422d377b170c173dad91fd498886acb9f79a29c83c62560b979840\"" Aug 12 23:58:51.702487 env[1323]: time="2025-08-12T23:58:51.702439368Z" level=info msg="StartContainer for \"197f411f47422d377b170c173dad91fd498886acb9f79a29c83c62560b979840\"" Aug 12 23:58:51.706364 env[1323]: time="2025-08-12T23:58:51.706320633Z" level=info msg="StopPodSandbox for \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\"" Aug 12 23:58:51.706821 env[1323]: time="2025-08-12T23:58:51.706792750Z" level=info msg="StopPodSandbox for \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\"" Aug 12 23:58:51.707000 audit[4423]: NETFILTER_CFG table=nat:102 family=2 entries=21 op=nft_register_chain pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:51.707000 audit[4423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=fffffb4e5570 a2=0 a3=1 items=0 ppid=2235 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:51.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:51.790263 systemd-networkd[1099]: cali9b6998d238a: Gained IPv6LL Aug 12 23:58:51.841291 env[1323]: time="2025-08-12T23:58:51.840854226Z" level=info msg="StartContainer for \"197f411f47422d377b170c173dad91fd498886acb9f79a29c83c62560b979840\" returns successfully" Aug 12 23:58:51.854956 kubelet[2127]: E0812 23:58:51.854818 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:51.856615 kubelet[2127]: E0812 23:58:51.855731 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.790 [INFO][4461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.791 [INFO][4461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" iface="eth0" netns="/var/run/netns/cni-714446c4-a513-8c56-473b-9e5fed87204d" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.791 [INFO][4461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" iface="eth0" netns="/var/run/netns/cni-714446c4-a513-8c56-473b-9e5fed87204d" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.791 [INFO][4461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" iface="eth0" netns="/var/run/netns/cni-714446c4-a513-8c56-473b-9e5fed87204d" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.791 [INFO][4461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.791 [INFO][4461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.829 [INFO][4487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.830 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.830 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.839 [WARNING][4487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.839 [INFO][4487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.843 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:51.862938 env[1323]: 2025-08-12 23:58:51.856 [INFO][4461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:58:51.862938 env[1323]: time="2025-08-12T23:58:51.861227904Z" level=info msg="TearDown network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\" successfully" Aug 12 23:58:51.862938 env[1323]: time="2025-08-12T23:58:51.861265867Z" level=info msg="StopPodSandbox for \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\" returns successfully" Aug 12 23:58:51.862938 env[1323]: time="2025-08-12T23:58:51.861968722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mxcdd,Uid:9dc08e5e-ae34-4c36-9f26-39270357d1c4,Namespace:kube-system,Attempt:1,}" Aug 12 23:58:51.863434 kubelet[2127]: E0812 23:58:51.861535 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:51.870153 kubelet[2127]: I0812 23:58:51.868721 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-66lv2" 
podStartSLOduration=36.868689849 podStartE2EDuration="36.868689849s" podCreationTimestamp="2025-08-12 23:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:58:51.868127085 +0000 UTC m=+41.265631836" watchObservedRunningTime="2025-08-12 23:58:51.868689849 +0000 UTC m=+41.266194600" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.794 [INFO][4449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.795 [INFO][4449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" iface="eth0" netns="/var/run/netns/cni-43c013c7-7d73-f13f-5a0b-9c3f34620a7a" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.797 [INFO][4449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" iface="eth0" netns="/var/run/netns/cni-43c013c7-7d73-f13f-5a0b-9c3f34620a7a" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.797 [INFO][4449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" iface="eth0" netns="/var/run/netns/cni-43c013c7-7d73-f13f-5a0b-9c3f34620a7a" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.797 [INFO][4449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.798 [INFO][4449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.834 [INFO][4493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.834 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.843 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.867 [WARNING][4493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.867 [INFO][4493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.871 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:51.877154 env[1323]: 2025-08-12 23:58:51.873 [INFO][4449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:58:51.880993 env[1323]: time="2025-08-12T23:58:51.877351328Z" level=info msg="TearDown network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\" successfully" Aug 12 23:58:51.880993 env[1323]: time="2025-08-12T23:58:51.877382251Z" level=info msg="StopPodSandbox for \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\" returns successfully" Aug 12 23:58:51.880993 env[1323]: time="2025-08-12T23:58:51.879504697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-vlgsf,Uid:4410b28d-7c64-4d83-b0dc-3486564fba4c,Namespace:calico-apiserver,Attempt:1,}" Aug 12 23:58:51.881000 audit[4513]: NETFILTER_CFG table=filter:103 family=2 entries=18 op=nft_register_rule pid=4513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:51.881000 audit[4513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2af1150 a2=0 a3=1 items=0 ppid=2235 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:51.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:51.893000 audit[4513]: NETFILTER_CFG table=nat:104 family=2 entries=16 op=nft_register_rule pid=4513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:51.893000 audit[4513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=ffffe2af1150 a2=0 a3=1 items=0 ppid=2235 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:51.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:52.027833 systemd[1]: run-netns-cni\x2d43c013c7\x2d7d73\x2df13f\x2d5a0b\x2d9c3f34620a7a.mount: Deactivated successfully. Aug 12 23:58:52.027968 systemd[1]: run-netns-cni\x2d714446c4\x2da513\x2d8c56\x2d473b\x2d9e5fed87204d.mount: Deactivated successfully. 
Aug 12 23:58:52.042763 systemd-networkd[1099]: cali1429fdf9466: Gained IPv6LL Aug 12 23:58:52.078693 systemd-networkd[1099]: cali0962282477f: Link UP Aug 12 23:58:52.079640 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 12 23:58:52.079693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0962282477f: link becomes ready Aug 12 23:58:52.079703 systemd-networkd[1099]: cali0962282477f: Gained carrier Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.931 [INFO][4526] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.951 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0 calico-apiserver-54ddd56b5- calico-apiserver 4410b28d-7c64-4d83-b0dc-3486564fba4c 1047 0 2025-08-12 23:58:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54ddd56b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54ddd56b5-vlgsf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0962282477f [] [] }} ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.951 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.980 [INFO][4544] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" HandleID="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.980 [INFO][4544] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" HandleID="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54ddd56b5-vlgsf", "timestamp":"2025-08-12 23:58:51.980069546 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.980 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.980 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.980 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:51.989 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.007 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.014 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.018 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.024 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.024 [INFO][4544] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.030 [INFO][4544] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.042 [INFO][4544] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.073 [INFO][4544] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" host="localhost" Aug 12 
23:58:52.124767 env[1323]: 2025-08-12 23:58:52.073 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" host="localhost" Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.073 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:52.124767 env[1323]: 2025-08-12 23:58:52.073 [INFO][4544] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" HandleID="k8s-pod-network.1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:52.126361 env[1323]: 2025-08-12 23:58:52.076 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4410b28d-7c64-4d83-b0dc-3486564fba4c", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54ddd56b5-vlgsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0962282477f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:52.126361 env[1323]: 2025-08-12 23:58:52.076 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:52.126361 env[1323]: 2025-08-12 23:58:52.077 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0962282477f ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:52.126361 env[1323]: 2025-08-12 23:58:52.080 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:58:52.126361 env[1323]: 2025-08-12 23:58:52.080 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4410b28d-7c64-4d83-b0dc-3486564fba4c", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c", Pod:"calico-apiserver-54ddd56b5-vlgsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0962282477f", MAC:"ce:59:5a:5c:77:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:52.126361 env[1323]: 2025-08-12 23:58:52.121 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c" Namespace="calico-apiserver" Pod="calico-apiserver-54ddd56b5-vlgsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 
12 23:58:52.147824 env[1323]: time="2025-08-12T23:58:52.147731627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:52.147974 env[1323]: time="2025-08-12T23:58:52.147846196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:52.147974 env[1323]: time="2025-08-12T23:58:52.147883239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:52.148144 env[1323]: time="2025-08-12T23:58:52.148111656Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c pid=4583 runtime=io.containerd.runc.v2 Aug 12 23:58:52.179965 systemd-networkd[1099]: cali63886eadb58: Link UP Aug 12 23:58:52.182288 systemd-networkd[1099]: cali63886eadb58: Gained carrier Aug 12 23:58:52.182656 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali63886eadb58: link becomes ready Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:51.931 [INFO][4515] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:51.954 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0 coredns-7c65d6cfc9- kube-system 9dc08e5e-ae34-4c36-9f26-39270357d1c4 1046 0 2025-08-12 23:58:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mxcdd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63886eadb58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:51.954 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:51.980 [INFO][4545] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" HandleID="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:51.980 [INFO][4545] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" HandleID="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000116ed0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mxcdd", "timestamp":"2025-08-12 23:58:51.980291763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:51.980 [INFO][4545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.073 [INFO][4545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.073 [INFO][4545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.121 [INFO][4545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.136 [INFO][4545] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.145 [INFO][4545] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.149 [INFO][4545] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.153 [INFO][4545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.153 [INFO][4545] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.155 [INFO][4545] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.160 [INFO][4545] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.168 [INFO][4545] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" host="localhost" Aug 12 
23:58:52.203196 env[1323]: 2025-08-12 23:58:52.168 [INFO][4545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" host="localhost" Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.168 [INFO][4545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:58:52.203196 env[1323]: 2025-08-12 23:58:52.168 [INFO][4545] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" HandleID="k8s-pod-network.107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:52.203968 env[1323]: 2025-08-12 23:58:52.171 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9dc08e5e-ae34-4c36-9f26-39270357d1c4", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mxcdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63886eadb58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:52.203968 env[1323]: 2025-08-12 23:58:52.171 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:52.203968 env[1323]: 2025-08-12 23:58:52.171 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63886eadb58 ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:52.203968 env[1323]: 2025-08-12 23:58:52.182 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:52.203968 env[1323]: 2025-08-12 23:58:52.183 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9dc08e5e-ae34-4c36-9f26-39270357d1c4", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e", Pod:"coredns-7c65d6cfc9-mxcdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63886eadb58", MAC:"b6:a8:ed:8f:a8:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:58:52.203968 env[1323]: 2025-08-12 23:58:52.199 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mxcdd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:58:52.205384 systemd[1]: run-containerd-runc-k8s.io-1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c-runc.1K6Ecb.mount: Deactivated successfully. Aug 12 23:58:52.216883 env[1323]: time="2025-08-12T23:58:52.216550778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:52.216883 env[1323]: time="2025-08-12T23:58:52.216654146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:52.216883 env[1323]: time="2025-08-12T23:58:52.216714631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:52.218592 env[1323]: time="2025-08-12T23:58:52.216977491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e pid=4621 runtime=io.containerd.runc.v2 Aug 12 23:58:52.263134 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:52.277045 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:58:52.306606 env[1323]: time="2025-08-12T23:58:52.306554912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54ddd56b5-vlgsf,Uid:4410b28d-7c64-4d83-b0dc-3486564fba4c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c\"" Aug 12 23:58:52.315931 env[1323]: time="2025-08-12T23:58:52.310994372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mxcdd,Uid:9dc08e5e-ae34-4c36-9f26-39270357d1c4,Namespace:kube-system,Attempt:1,} returns sandbox id \"107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e\"" Aug 12 23:58:52.317720 kubelet[2127]: E0812 23:58:52.317409 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:52.320925 env[1323]: time="2025-08-12T23:58:52.320883889Z" level=info msg="CreateContainer within sandbox \"107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:58:52.358961 env[1323]: time="2025-08-12T23:58:52.356415050Z" level=info msg="CreateContainer within sandbox \"107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7647bf94986229f12b715679b7cb3200f4178ef3fcad3c2a40ed5f54058ad73\"" Aug 12 23:58:52.359672 env[1323]: time="2025-08-12T23:58:52.359618296Z" level=info msg="StartContainer for \"c7647bf94986229f12b715679b7cb3200f4178ef3fcad3c2a40ed5f54058ad73\"" Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.396000 audit: BPF prog-id=10 op=LOAD Aug 12 23:58:52.396000 audit[4716]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1a40228 a2=98 a3=ffffc1a40218 items=0 ppid=4595 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.396000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 12 23:58:52.404000 audit: BPF prog-id=10 op=UNLOAD Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit: BPF prog-id=11 op=LOAD Aug 12 23:58:52.404000 audit[4716]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1a400d8 a2=74 a3=95 items=0 ppid=4595 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 12 23:58:52.404000 audit: BPF prog-id=11 op=UNLOAD Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { bpf } for pid=4716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit: BPF prog-id=12 op=LOAD Aug 12 23:58:52.404000 audit[4716]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1a40108 a2=40 a3=ffffc1a40138 items=0 ppid=4595 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 12 23:58:52.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 12 23:58:52.404000 audit: BPF prog-id=12 op=UNLOAD Aug 12 23:58:52.404000 audit[4716]: AVC avc: denied { perfmon } for pid=4716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.404000 audit[4716]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffc1a40220 a2=50 a3=0 items=0 ppid=4595 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.408000 audit: BPF prog-id=13 op=LOAD Aug 12 23:58:52.408000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd966be8 a2=98 a3=ffffcd966bd8 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.408000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.409000 audit: BPF prog-id=13 op=UNLOAD Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } 
for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit: BPF prog-id=14 op=LOAD Aug 12 23:58:52.409000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd966878 a2=74 a3=95 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.409000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.409000 audit: BPF prog-id=14 op=UNLOAD Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit[4724]: AVC avc: denied { bpf } for pid=4724 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.409000 audit: BPF prog-id=15 op=LOAD Aug 12 23:58:52.409000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd9668d8 a2=94 a3=2 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.409000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.409000 audit: BPF prog-id=15 op=UNLOAD Aug 12 23:58:52.454609 env[1323]: time="2025-08-12T23:58:52.454549727Z" level=info msg="StartContainer for \"c7647bf94986229f12b715679b7cb3200f4178ef3fcad3c2a40ed5f54058ad73\" returns successfully" Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { 
perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.521000 audit: BPF prog-id=16 op=LOAD Aug 12 23:58:52.521000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd966898 a2=40 a3=ffffcd9668c8 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.521000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.522000 audit: BPF prog-id=16 op=UNLOAD Aug 12 23:58:52.522000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.522000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcd9669b0 a2=50 a3=0 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 
23:58:52.533000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.533000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd966908 a2=28 a3=ffffcd966a38 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.533000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.533000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd966938 a2=28 a3=ffffcd966a68 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.533000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.533000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd9667e8 a2=28 a3=ffffcd966918 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.534000 audit[4724]: AVC avc: denied 
{ bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.534000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd966958 a2=28 a3=ffffcd966a88 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.534000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.534000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd966938 a2=28 a3=ffffcd966a68 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.534000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.534000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd966928 a2=28 a3=ffffcd966a58 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.534000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.534000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd966958 a2=28 a3=ffffcd966a88 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.534000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.534000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd966938 a2=28 a3=ffffcd966a68 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.534000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.534000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd966958 a2=28 a3=ffffcd966a88 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.534000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.534000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd966928 a2=28 a3=ffffcd966a58 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd9669a8 a2=28 a3=ffffcd966ae8 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.535000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcd9666e0 a2=50 a3=0 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.535000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.535000 audit: BPF prog-id=17 op=LOAD Aug 12 23:58:52.535000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd9666e8 a2=94 a3=5 items=0 ppid=4595 pid=4724 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.535000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.536000 audit: BPF prog-id=17 op=UNLOAD Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcd9667f0 a2=50 a3=0 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.536000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcd966938 a2=4 a3=3 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.536000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.536000 audit[4724]: AVC avc: denied { confidentiality } for pid=4724 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 12 23:58:52.536000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd966918 a2=94 a3=6 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.536000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 
12 23:58:52.537000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.537000 audit[4724]: AVC avc: denied { confidentiality } for pid=4724 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 12 23:58:52.537000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd9660e8 a2=94 a3=83 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.537000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: 
AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { perfmon } for pid=4724 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { bpf } for pid=4724 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.538000 audit[4724]: AVC avc: denied { confidentiality } for pid=4724 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 12 23:58:52.538000 audit[4724]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd9660e8 a2=94 a3=83 items=0 ppid=4595 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.538000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit: BPF prog-id=18 op=LOAD Aug 12 23:58:52.624000 audit[4761]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc5d4a9e8 a2=98 a3=ffffc5d4a9d8 items=0 ppid=4595 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.624000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 12 23:58:52.624000 audit: BPF prog-id=18 op=UNLOAD Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon 
} for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit: BPF prog-id=19 op=LOAD Aug 12 23:58:52.624000 audit[4761]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc5d4a898 a2=74 a3=95 items=0 ppid=4595 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.624000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 12 23:58:52.624000 audit: BPF prog-id=19 op=UNLOAD Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: 
AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.624000 audit: BPF prog-id=20 op=LOAD Aug 12 23:58:52.624000 audit[4761]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc5d4a8c8 a2=40 a3=ffffc5d4a8f8 items=0 ppid=4595 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.624000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 12 23:58:52.625000 audit: BPF prog-id=20 op=UNLOAD Aug 12 23:58:52.746803 systemd-networkd[1099]: cali72abd4fdbd7: Gained IPv6LL Aug 12 23:58:52.806399 systemd-networkd[1099]: vxlan.calico: Link UP Aug 12 23:58:52.806412 systemd-networkd[1099]: vxlan.calico: Gained carrier Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit: BPF prog-id=21 op=LOAD Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea4aa458 a2=98 a3=ffffea4aa448 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit: BPF prog-id=21 op=UNLOAD Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit: BPF prog-id=22 op=LOAD Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea4aa138 a2=74 a3=95 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit: BPF prog-id=22 op=UNLOAD Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit: BPF prog-id=23 op=LOAD Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea4aa198 a2=94 a3=2 items=0 ppid=4595 pid=4786 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit: BPF prog-id=23 op=UNLOAD Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffea4aa1c8 a2=28 a3=ffffea4aa2f8 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffea4aa1f8 a2=28 a3=ffffea4aa328 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffea4aa0a8 a2=28 a3=ffffea4aa1d8 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffea4aa218 a2=28 a3=ffffea4aa348 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffea4aa1f8 a2=28 a3=ffffea4aa328 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffea4aa1e8 a2=28 a3=ffffea4aa318 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffea4aa218 a2=28 a3=ffffea4aa348 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffea4aa1f8 a2=28 a3=ffffea4aa328 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffea4aa218 a2=28 a3=ffffea4aa348 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for 
pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffea4aa1e8 a2=28 a3=ffffea4aa318 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffea4aa268 a2=28 a3=ffffea4aa3a8 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 
23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.838000 audit: BPF prog-id=24 op=LOAD Aug 12 23:58:52.838000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea4aa088 a2=40 a3=ffffea4aa0b8 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.838000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.839000 audit: BPF prog-id=24 op=UNLOAD Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffea4aa0b0 a2=50 a3=0 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffea4aa0b0 a2=50 a3=0 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit: BPF prog-id=25 op=LOAD Aug 12 23:58:52.839000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea4a9818 a2=94 a3=2 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.839000 audit: BPF prog-id=25 op=UNLOAD Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { perfmon } for pid=4786 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit[4786]: AVC avc: denied { bpf } for pid=4786 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.839000 audit: BPF prog-id=26 op=LOAD Aug 12 23:58:52.839000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea4a99a8 a2=94 a3=30 items=0 ppid=4595 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 
12 23:58:52.844000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.844000 audit: BPF prog-id=27 op=LOAD Aug 12 23:58:52.844000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd62bc858 a2=98 a3=ffffd62bc848 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.844000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.847000 audit: BPF prog-id=27 op=UNLOAD Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit[4790]: AVC 
avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.847000 audit: BPF prog-id=28 op=LOAD Aug 12 23:58:52.847000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd62bc4e8 a2=74 a3=95 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.847000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.848000 audit: BPF prog-id=28 op=UNLOAD Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.848000 audit: BPF prog-id=29 op=LOAD Aug 12 23:58:52.848000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd62bc548 a2=94 a3=2 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.848000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.848000 audit: BPF prog-id=29 op=UNLOAD Aug 12 23:58:52.861410 kubelet[2127]: E0812 23:58:52.861372 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:52.870543 kubelet[2127]: E0812 23:58:52.865494 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:52.874776 kubelet[2127]: I0812 23:58:52.874722 2127 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mxcdd" podStartSLOduration=37.874693426 podStartE2EDuration="37.874693426s" podCreationTimestamp="2025-08-12 23:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:58:52.874294836 +0000 UTC m=+42.271799587" watchObservedRunningTime="2025-08-12 23:58:52.874693426 +0000 UTC m=+42.272198177" Aug 12 23:58:52.877548 systemd-networkd[1099]: cali580531e3e15: Gained IPv6LL Aug 12 23:58:52.886000 audit[4799]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:52.886000 audit[4799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffc6fc2380 a2=0 a3=1 items=0 ppid=2235 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:52.899000 audit[4799]: NETFILTER_CFG table=nat:106 family=2 entries=37 op=nft_register_chain pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:52.899000 audit[4799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=ffffc6fc2380 a2=0 a3=1 items=0 ppid=2235 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { bpf } 
for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit: BPF prog-id=30 op=LOAD Aug 12 23:58:52.954000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=4 a0=5 a1=ffffd62bc508 a2=40 a3=ffffd62bc538 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.954000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.954000 audit: BPF prog-id=30 op=UNLOAD Aug 12 23:58:52.954000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.954000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd62bc620 a2=50 a3=0 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.954000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.962000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.962000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd62bc578 a2=28 a3=ffffd62bc6a8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.962000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd62bc5a8 a2=28 a3=ffffd62bc6d8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd62bc458 a2=28 a3=ffffd62bc588 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 
audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd62bc5c8 a2=28 a3=ffffd62bc6f8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd62bc5a8 a2=28 a3=ffffd62bc6d8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd62bc598 a2=28 a3=ffffd62bc6c8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd62bc5c8 a2=28 a3=ffffd62bc6f8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd62bc5a8 a2=28 a3=ffffd62bc6d8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 
audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd62bc5c8 a2=28 a3=ffffd62bc6f8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd62bc598 a2=28 a3=ffffd62bc6c8 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd62bc618 a2=28 a3=ffffd62bc758 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd62bc350 a2=50 a3=0 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit: BPF prog-id=31 op=LOAD Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd62bc358 a2=94 a3=5 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit: BPF prog-id=31 op=UNLOAD Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd62bc460 a2=50 a3=0 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd62bc5a8 a2=4 a3=3 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 
23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { confidentiality } for pid=4790 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd62bc588 a2=94 a3=6 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { confidentiality } for pid=4790 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd62bbd58 a2=94 a3=83 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { perfmon } for pid=4790 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.963000 audit[4790]: AVC avc: denied { confidentiality } for pid=4790 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 12 23:58:52.963000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd62bbd58 a2=94 a3=83 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.963000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.964000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.964000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd62bd798 a2=10 a3=ffffd62bd888 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.964000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.964000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.964000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd62bd658 a2=10 a3=ffffd62bd748 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.964000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.964000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.964000 
audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd62bd5c8 a2=10 a3=ffffd62bd748 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.964000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.964000 audit[4790]: AVC avc: denied { bpf } for pid=4790 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 12 23:58:52.964000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd62bd5c8 a2=10 a3=ffffd62bd748 items=0 ppid=4595 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:52.964000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 12 23:58:52.970000 audit: BPF prog-id=26 op=UNLOAD Aug 12 23:58:53.060000 audit[4831]: NETFILTER_CFG table=mangle:107 family=2 entries=16 op=nft_register_chain pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 12 23:58:53.060000 audit[4831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc17dc380 a2=0 a3=ffff88263fa8 items=0 ppid=4595 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:53.060000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 12 23:58:53.070000 audit[4829]: NETFILTER_CFG table=raw:108 family=2 entries=21 op=nft_register_chain pid=4829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 12 23:58:53.070000 audit[4829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc115dac0 a2=0 a3=ffffa01cdfa8 items=0 ppid=4595 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:53.070000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 12 23:58:53.075000 audit[4830]: NETFILTER_CFG table=nat:109 family=2 entries=15 op=nft_register_chain pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 12 23:58:53.075000 audit[4830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff3a63570 a2=0 a3=ffff8cf4dfa8 items=0 ppid=4595 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:53.075000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 12 23:58:53.085000 audit[4835]: NETFILTER_CFG table=filter:110 family=2 entries=321 op=nft_register_chain pid=4835 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 12 23:58:53.085000 audit[4835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=190616 a0=3 a1=fffffc8ce7d0 a2=0 a3=ffff953cafa8 items=0 ppid=4595 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:53.085000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 12 23:58:53.514751 systemd-networkd[1099]: cali0962282477f: Gained IPv6LL Aug 12 23:58:53.681567 env[1323]: time="2025-08-12T23:58:53.681508244Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:53.687473 env[1323]: time="2025-08-12T23:58:53.687423927Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:53.689433 env[1323]: time="2025-08-12T23:58:53.689378633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:53.691071 env[1323]: time="2025-08-12T23:58:53.691034477Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:53.691596 env[1323]: time="2025-08-12T23:58:53.691556716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:58:53.694702 env[1323]: time="2025-08-12T23:58:53.694660588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 12 23:58:53.695868 env[1323]: time="2025-08-12T23:58:53.695827356Z" level=info 
msg="CreateContainer within sandbox \"fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:58:53.710235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149068895.mount: Deactivated successfully. Aug 12 23:58:53.713817 env[1323]: time="2025-08-12T23:58:53.713737896Z" level=info msg="CreateContainer within sandbox \"fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b53c2409356164a2c745eb849559f5abbabbf052a4585f5b606b58f5ae99eb2b\"" Aug 12 23:58:53.714408 env[1323]: time="2025-08-12T23:58:53.714376864Z" level=info msg="StartContainer for \"b53c2409356164a2c745eb849559f5abbabbf052a4585f5b606b58f5ae99eb2b\"" Aug 12 23:58:53.772577 systemd-networkd[1099]: cali63886eadb58: Gained IPv6LL Aug 12 23:58:53.890118 env[1323]: time="2025-08-12T23:58:53.890074977Z" level=info msg="StartContainer for \"b53c2409356164a2c745eb849559f5abbabbf052a4585f5b606b58f5ae99eb2b\" returns successfully" Aug 12 23:58:53.899858 kubelet[2127]: E0812 23:58:53.896285 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:53.899858 kubelet[2127]: E0812 23:58:53.896533 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:53.929000 audit[4880]: NETFILTER_CFG table=filter:111 family=2 entries=12 op=nft_register_rule pid=4880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:53.929000 audit[4880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe0d25080 a2=0 a3=1 items=0 ppid=2235 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:53.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:53.953000 audit[4880]: NETFILTER_CFG table=nat:112 family=2 entries=58 op=nft_register_chain pid=4880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:53.953000 audit[4880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=ffffe0d25080 a2=0 a3=1 items=0 ppid=2235 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:53.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:54.667734 systemd-networkd[1099]: vxlan.calico: Gained IPv6LL Aug 12 23:58:54.900297 kubelet[2127]: E0812 23:58:54.900095 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:54.970000 audit[4890]: NETFILTER_CFG table=filter:113 family=2 entries=12 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:54.970000 audit[4890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffcc2c5f30 a2=0 a3=1 items=0 ppid=2235 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:54.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:54.976000 audit[4890]: NETFILTER_CFG table=nat:114 
family=2 entries=22 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:54.976000 audit[4890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffcc2c5f30 a2=0 a3=1 items=0 ppid=2235 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:54.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:55.287594 kubelet[2127]: I0812 23:58:55.287536 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54ddd56b5-bk8rz" podStartSLOduration=27.229536714 podStartE2EDuration="30.287516384s" podCreationTimestamp="2025-08-12 23:58:25 +0000 UTC" firstStartedPulling="2025-08-12 23:58:50.634477553 +0000 UTC m=+40.031982304" lastFinishedPulling="2025-08-12 23:58:53.692457223 +0000 UTC m=+43.089961974" observedRunningTime="2025-08-12 23:58:53.913126542 +0000 UTC m=+43.310631253" watchObservedRunningTime="2025-08-12 23:58:55.287516384 +0000 UTC m=+44.685021135" Aug 12 23:58:55.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.49:22-10.0.0.1:43610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:55.543491 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:43610.service. Aug 12 23:58:55.546796 kernel: kauditd_printk_skb: 553 callbacks suppressed Aug 12 23:58:55.546879 kernel: audit: type=1130 audit(1755043135.543:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.49:22-10.0.0.1:43610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:58:55.606000 audit[4893]: USER_ACCT pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:55.606964 sshd[4893]: Accepted publickey for core from 10.0.0.1 port 43610 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:58:55.609665 kernel: audit: type=1101 audit(1755043135.606:428): pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:55.612000 audit[4893]: CRED_ACQ pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:55.615664 sshd[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:58:55.617412 kernel: audit: type=1103 audit(1755043135.612:429): pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:55.617577 kernel: audit: type=1006 audit(1755043135.613:430): pid=4893 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Aug 12 23:58:55.617602 kernel: audit: type=1300 audit(1755043135.613:430): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff885c880 a2=3 a3=1 items=0 ppid=1 pid=4893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Aug 12 23:58:55.613000 audit[4893]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff885c880 a2=3 a3=1 items=0 ppid=1 pid=4893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:55.613000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:58:55.621542 kernel: audit: type=1327 audit(1755043135.613:430): proctitle=737368643A20636F7265205B707269765D Aug 12 23:58:55.620985 systemd-logind[1309]: New session 9 of user core. Aug 12 23:58:55.621878 systemd[1]: Started session-9.scope. Aug 12 23:58:55.626000 audit[4893]: USER_START pid=4893 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:55.627000 audit[4896]: CRED_ACQ pid=4896 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:55.632588 kernel: audit: type=1105 audit(1755043135.626:431): pid=4893 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:55.632789 kernel: audit: type=1103 audit(1755043135.627:432): pid=4896 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:56.017000 audit[4908]: NETFILTER_CFG table=filter:115 family=2 entries=11 op=nft_register_rule pid=4908 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:56.017000 audit[4908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffe65ded20 a2=0 a3=1 items=0 ppid=2235 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:56.023856 kernel: audit: type=1325 audit(1755043136.017:433): table=filter:115 family=2 entries=11 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:56.023963 kernel: audit: type=1300 audit(1755043136.017:433): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffe65ded20 a2=0 a3=1 items=0 ppid=2235 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:56.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:56.028000 audit[4908]: NETFILTER_CFG table=nat:116 family=2 entries=29 op=nft_register_chain pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:56.028000 audit[4908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffe65ded20 a2=0 a3=1 items=0 ppid=2235 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:56.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:56.089766 sshd[4893]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:56.090000 audit[4893]: USER_END pid=4893 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:56.091000 audit[4893]: CRED_DISP pid=4893 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:58:56.095446 systemd-logind[1309]: Session 9 logged out. Waiting for processes to exit. Aug 12 23:58:56.095691 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:43610.service: Deactivated successfully. Aug 12 23:58:56.096667 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:58:56.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.49:22-10.0.0.1:43610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:58:56.097250 systemd-logind[1309]: Removed session 9. 
Aug 12 23:58:56.107645 env[1323]: time="2025-08-12T23:58:56.107552317Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:56.110295 env[1323]: time="2025-08-12T23:58:56.110229225Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:56.113485 env[1323]: time="2025-08-12T23:58:56.113424450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:56.115465 env[1323]: time="2025-08-12T23:58:56.115119529Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:56.115826 env[1323]: time="2025-08-12T23:58:56.115796136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 12 23:58:56.118184 env[1323]: time="2025-08-12T23:58:56.118130300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 12 23:58:56.131470 env[1323]: time="2025-08-12T23:58:56.127418553Z" level=info msg="CreateContainer within sandbox \"ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 12 23:58:56.153126 env[1323]: time="2025-08-12T23:58:56.153052795Z" level=info msg="CreateContainer within sandbox \"ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9741fb46592a801d121a8107f0042c2040be5d200814b403884407293cd57100\"" Aug 12 23:58:56.153670 env[1323]: time="2025-08-12T23:58:56.153608634Z" level=info msg="StartContainer for \"9741fb46592a801d121a8107f0042c2040be5d200814b403884407293cd57100\"" Aug 12 23:58:56.255313 env[1323]: time="2025-08-12T23:58:56.255146570Z" level=info msg="StartContainer for \"9741fb46592a801d121a8107f0042c2040be5d200814b403884407293cd57100\" returns successfully" Aug 12 23:58:56.974207 kubelet[2127]: I0812 23:58:56.973887 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-645c974fb8-zw4bc" podStartSLOduration=22.542750792 podStartE2EDuration="27.973868485s" podCreationTimestamp="2025-08-12 23:58:29 +0000 UTC" firstStartedPulling="2025-08-12 23:58:50.686019658 +0000 UTC m=+40.083524409" lastFinishedPulling="2025-08-12 23:58:56.117137311 +0000 UTC m=+45.514642102" observedRunningTime="2025-08-12 23:58:56.97181366 +0000 UTC m=+46.369318411" watchObservedRunningTime="2025-08-12 23:58:56.973868485 +0000 UTC m=+46.371373236" Aug 12 23:58:58.381281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733636209.mount: Deactivated successfully. Aug 12 23:58:58.564437 kubelet[2127]: I0812 23:58:58.563857 2127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:58:58.586895 systemd[1]: run-containerd-runc-k8s.io-cbcb7c58eb7f78ed8bea814a4f05e90ff612bac1a2b27245de061a2d3ceed4ef-runc.aFo8NN.mount: Deactivated successfully. 
Aug 12 23:58:59.228343 env[1323]: time="2025-08-12T23:58:59.228290584Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:59.238413 env[1323]: time="2025-08-12T23:58:59.238371695Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:59.240916 env[1323]: time="2025-08-12T23:58:59.240868301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:59.243459 env[1323]: time="2025-08-12T23:58:59.243421511Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:58:59.244432 env[1323]: time="2025-08-12T23:58:59.244393656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 12 23:58:59.246353 env[1323]: time="2025-08-12T23:58:59.246321984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 12 23:58:59.246998 env[1323]: time="2025-08-12T23:58:59.246959106Z" level=info msg="CreateContainer within sandbox \"3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 12 23:58:59.266935 env[1323]: time="2025-08-12T23:58:59.266873911Z" level=info msg="CreateContainer within sandbox \"3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"63ebb84f605936612ad4ccae9079cd09981ae383a7873ecd64a09b393cc84425\"" Aug 12 23:58:59.267714 env[1323]: time="2025-08-12T23:58:59.267688085Z" level=info msg="StartContainer for \"63ebb84f605936612ad4ccae9079cd09981ae383a7873ecd64a09b393cc84425\"" Aug 12 23:58:59.342033 env[1323]: time="2025-08-12T23:58:59.341978026Z" level=info msg="StartContainer for \"63ebb84f605936612ad4ccae9079cd09981ae383a7873ecd64a09b393cc84425\" returns successfully" Aug 12 23:58:59.940663 systemd[1]: run-containerd-runc-k8s.io-63ebb84f605936612ad4ccae9079cd09981ae383a7873ecd64a09b393cc84425-runc.CaSJmw.mount: Deactivated successfully. Aug 12 23:58:59.955000 audit[5084]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:59.955000 audit[5084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd6313090 a2=0 a3=1 items=0 ppid=2235 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:59.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:58:59.968000 audit[5084]: NETFILTER_CFG table=nat:118 family=2 entries=24 op=nft_register_rule pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:58:59.968000 audit[5084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffd6313090 a2=0 a3=1 items=0 ppid=2235 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:58:59.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:00.381419 
systemd[1]: run-containerd-runc-k8s.io-63ebb84f605936612ad4ccae9079cd09981ae383a7873ecd64a09b393cc84425-runc.dAkjjZ.mount: Deactivated successfully. Aug 12 23:59:00.587922 env[1323]: time="2025-08-12T23:59:00.587875888Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.589479 env[1323]: time="2025-08-12T23:59:00.589446911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.593034 env[1323]: time="2025-08-12T23:59:00.592989222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.595487 env[1323]: time="2025-08-12T23:59:00.595442983Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.596200 env[1323]: time="2025-08-12T23:59:00.596161470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 12 23:59:00.598252 env[1323]: time="2025-08-12T23:59:00.598182802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:59:00.598764 env[1323]: time="2025-08-12T23:59:00.598709676Z" level=info msg="CreateContainer within sandbox \"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 12 23:59:00.615777 env[1323]: 
time="2025-08-12T23:59:00.613389517Z" level=info msg="CreateContainer within sandbox \"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"44c9d2d919f4730cd5a6a0096232faa6040309216499035704fc9de295d098eb\"" Aug 12 23:59:00.616609 env[1323]: time="2025-08-12T23:59:00.616578005Z" level=info msg="StartContainer for \"44c9d2d919f4730cd5a6a0096232faa6040309216499035704fc9de295d098eb\"" Aug 12 23:59:00.699491 env[1323]: time="2025-08-12T23:59:00.699377261Z" level=info msg="StartContainer for \"44c9d2d919f4730cd5a6a0096232faa6040309216499035704fc9de295d098eb\" returns successfully" Aug 12 23:59:00.788527 kubelet[2127]: I0812 23:59:00.788473 2127 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 12 23:59:00.790717 kubelet[2127]: I0812 23:59:00.790691 2127 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 12 23:59:00.896143 env[1323]: time="2025-08-12T23:59:00.896100809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.902475 env[1323]: time="2025-08-12T23:59:00.902417662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.904726 env[1323]: time="2025-08-12T23:59:00.904689371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.911777 env[1323]: time="2025-08-12T23:59:00.911721111Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:59:00.912896 env[1323]: time="2025-08-12T23:59:00.912290188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:59:00.920240 env[1323]: time="2025-08-12T23:59:00.920192265Z" level=info msg="CreateContainer within sandbox \"1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:59:00.943717 kubelet[2127]: I0812 23:59:00.943458 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wzhss" podStartSLOduration=21.934362079 podStartE2EDuration="31.943411024s" podCreationTimestamp="2025-08-12 23:58:29 +0000 UTC" firstStartedPulling="2025-08-12 23:58:50.588432411 +0000 UTC m=+39.985937162" lastFinishedPulling="2025-08-12 23:59:00.597481356 +0000 UTC m=+49.994986107" observedRunningTime="2025-08-12 23:59:00.942925472 +0000 UTC m=+50.340430223" watchObservedRunningTime="2025-08-12 23:59:00.943411024 +0000 UTC m=+50.340915775" Aug 12 23:59:00.944252 kubelet[2127]: I0812 23:59:00.944084 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-h599k" podStartSLOduration=24.767912206 podStartE2EDuration="32.944075067s" podCreationTimestamp="2025-08-12 23:58:28 +0000 UTC" firstStartedPulling="2025-08-12 23:58:51.069311706 +0000 UTC m=+40.466816457" lastFinishedPulling="2025-08-12 23:58:59.245474567 +0000 UTC m=+48.642979318" observedRunningTime="2025-08-12 23:58:59.930766789 +0000 UTC m=+49.328271540" watchObservedRunningTime="2025-08-12 23:59:00.944075067 +0000 UTC m=+50.341579818" Aug 12 23:59:00.950519 env[1323]: 
time="2025-08-12T23:59:00.950398041Z" level=info msg="CreateContainer within sandbox \"1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0f5172a72e8900c53901f8855450db2d4e34628226fa6588ae6a4f019d852299\"" Aug 12 23:59:00.952238 env[1323]: time="2025-08-12T23:59:00.952133794Z" level=info msg="StartContainer for \"0f5172a72e8900c53901f8855450db2d4e34628226fa6588ae6a4f019d852299\"" Aug 12 23:59:01.033749 env[1323]: time="2025-08-12T23:59:01.033689175Z" level=info msg="StartContainer for \"0f5172a72e8900c53901f8855450db2d4e34628226fa6588ae6a4f019d852299\" returns successfully" Aug 12 23:59:01.080000 audit[5214]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:01.081758 kernel: kauditd_printk_skb: 13 callbacks suppressed Aug 12 23:59:01.081829 kernel: audit: type=1325 audit(1755043141.080:440): table=filter:119 family=2 entries=9 op=nft_register_rule pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:01.080000 audit[5214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffec6ecfb0 a2=0 a3=1 items=0 ppid=2235 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.085997 kernel: audit: type=1300 audit(1755043141.080:440): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffec6ecfb0 a2=0 a3=1 items=0 ppid=2235 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.086082 kernel: audit: type=1327 audit(1755043141.080:440): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:01.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:01.089000 audit[5214]: NETFILTER_CFG table=nat:120 family=2 entries=31 op=nft_register_chain pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:01.089000 audit[5214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffec6ecfb0 a2=0 a3=1 items=0 ppid=2235 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.093477 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:43624.service. Aug 12 23:59:01.095664 kernel: audit: type=1325 audit(1755043141.089:441): table=nat:120 family=2 entries=31 op=nft_register_chain pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:01.095762 kernel: audit: type=1300 audit(1755043141.089:441): arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffec6ecfb0 a2=0 a3=1 items=0 ppid=2235 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.095786 kernel: audit: type=1327 audit(1755043141.089:441): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:01.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:01.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.49:22-10.0.0.1:43624 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:01.102024 kernel: audit: type=1130 audit(1755043141.092:442): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.49:22-10.0.0.1:43624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:01.144000 audit[5215]: USER_ACCT pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.145004 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 43624 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:01.147690 kernel: audit: type=1101 audit(1755043141.144:443): pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.148000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.149256 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:01.152920 kernel: audit: type=1103 audit(1755043141.148:444): pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.152989 kernel: audit: type=1006 audit(1755043141.148:445): pid=5215 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 
ses=10 res=1 Aug 12 23:59:01.148000 audit[5215]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2255ed0 a2=3 a3=1 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.148000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:01.158076 systemd-logind[1309]: New session 10 of user core. Aug 12 23:59:01.159121 systemd[1]: Started session-10.scope. Aug 12 23:59:01.166000 audit[5215]: USER_START pid=5215 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.168000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.465172 sshd[5215]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:01.467838 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:43636.service. 
Aug 12 23:59:01.465000 audit[5215]: USER_END pid=5215 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.465000 audit[5215]: CRED_DISP pid=5215 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.49:22-10.0.0.1:43636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:01.473142 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:43624.service: Deactivated successfully. Aug 12 23:59:01.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.49:22-10.0.0.1:43624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:01.474293 systemd-logind[1309]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:59:01.474397 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:59:01.475174 systemd-logind[1309]: Removed session 10. 
Aug 12 23:59:01.515000 audit[5233]: USER_ACCT pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.516345 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 43636 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:01.516000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.517000 audit[5233]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd1dbab0 a2=3 a3=1 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.517000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:01.518177 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:01.525986 systemd-logind[1309]: New session 11 of user core. Aug 12 23:59:01.527881 systemd[1]: Started session-11.scope. 
Aug 12 23:59:01.531000 audit[5233]: USER_START pid=5233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.533000 audit[5238]: CRED_ACQ pid=5238 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.750534 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:43644.service. Aug 12 23:59:01.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.49:22-10.0.0.1:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:01.753464 sshd[5233]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:01.753000 audit[5233]: USER_END pid=5233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.754000 audit[5233]: CRED_DISP pid=5233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.758672 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:43636.service: Deactivated successfully. Aug 12 23:59:01.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.49:22-10.0.0.1:43636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:59:01.759826 systemd-logind[1309]: Session 11 logged out. Waiting for processes to exit. Aug 12 23:59:01.759887 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:59:01.760753 systemd-logind[1309]: Removed session 11. Aug 12 23:59:01.795423 sshd[5246]: Accepted publickey for core from 10.0.0.1 port 43644 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:01.794000 audit[5246]: USER_ACCT pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.796000 audit[5246]: CRED_ACQ pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.796000 audit[5246]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeaf978e0 a2=3 a3=1 items=0 ppid=1 pid=5246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.796000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:01.797069 sshd[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:01.802829 systemd-logind[1309]: New session 12 of user core. Aug 12 23:59:01.803906 systemd[1]: Started session-12.scope. 
Aug 12 23:59:01.808000 audit[5246]: USER_START pid=5246 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.809000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.967960 sshd[5246]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:01.969000 audit[5246]: USER_END pid=5246 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.969000 audit[5246]: CRED_DISP pid=5246 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:01.971719 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:43644.service: Deactivated successfully. Aug 12 23:59:01.972799 systemd-logind[1309]: Session 12 logged out. Waiting for processes to exit. Aug 12 23:59:01.972869 systemd[1]: session-12.scope: Deactivated successfully. 
Aug 12 23:59:01.970000 audit[5261]: NETFILTER_CFG table=filter:121 family=2 entries=8 op=nft_register_rule pid=5261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:01.970000 audit[5261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff98c29f0 a2=0 a3=1 items=0 ppid=2235 pid=5261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:01.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.49:22-10.0.0.1:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:01.974091 systemd-logind[1309]: Removed session 12. Aug 12 23:59:01.977000 audit[5261]: NETFILTER_CFG table=nat:122 family=2 entries=34 op=nft_register_rule pid=5261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:01.977000 audit[5261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=fffff98c29f0 a2=0 a3=1 items=0 ppid=2235 pid=5261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:01.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:02.935136 kubelet[2127]: I0812 23:59:02.935095 2127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:59:06.969820 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:34532.service. 
Aug 12 23:59:06.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.49:22-10.0.0.1:34532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:06.978228 kernel: kauditd_printk_skb: 35 callbacks suppressed Aug 12 23:59:06.978405 kernel: audit: type=1130 audit(1755043146.968:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.49:22-10.0.0.1:34532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:07.025000 audit[5276]: USER_ACCT pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.027609 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 34532 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:07.029772 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:07.027000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.034700 kernel: audit: type=1101 audit(1755043147.025:472): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.034792 kernel: audit: type=1103 audit(1755043147.027:473): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.038452 kernel: audit: type=1006 audit(1755043147.027:474): pid=5276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Aug 12 23:59:07.027000 audit[5276]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffccb80740 a2=3 a3=1 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:07.040967 systemd-logind[1309]: New session 13 of user core. Aug 12 23:59:07.042319 systemd[1]: Started session-13.scope. Aug 12 23:59:07.044047 kernel: audit: type=1300 audit(1755043147.027:474): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffccb80740 a2=3 a3=1 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:07.027000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:07.045722 kernel: audit: type=1327 audit(1755043147.027:474): proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:07.047000 audit[5276]: USER_START pid=5276 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.053071 kernel: audit: type=1105 audit(1755043147.047:475): pid=5276 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.050000 audit[5279]: CRED_ACQ 
pid=5279 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.055670 kernel: audit: type=1103 audit(1755043147.050:476): pid=5279 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.242985 sshd[5276]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:07.242000 audit[5276]: USER_END pid=5276 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.242000 audit[5276]: CRED_DISP pid=5276 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.246174 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:34532.service: Deactivated successfully. Aug 12 23:59:07.247031 systemd[1]: session-13.scope: Deactivated successfully. Aug 12 23:59:07.248047 systemd-logind[1309]: Session 13 logged out. Waiting for processes to exit. 
Aug 12 23:59:07.249247 kernel: audit: type=1106 audit(1755043147.242:477): pid=5276 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.249322 kernel: audit: type=1104 audit(1755043147.242:478): pid=5276 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:07.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.49:22-10.0.0.1:34532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:07.249053 systemd-logind[1309]: Removed session 13. Aug 12 23:59:10.188555 systemd[1]: run-containerd-runc-k8s.io-63ebb84f605936612ad4ccae9079cd09981ae383a7873ecd64a09b393cc84425-runc.7Ywe3z.mount: Deactivated successfully. Aug 12 23:59:10.691000 env[1323]: time="2025-08-12T23:59:10.689035035Z" level=info msg="StopPodSandbox for \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\"" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.798 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--h599k-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f", Pod:"goldmane-58fd7646b9-h599k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali580531e3e15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.800 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.800 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" iface="eth0" netns="" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.800 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.800 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.831 [INFO][5331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.831 [INFO][5331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.831 [INFO][5331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.841 [WARNING][5331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.841 [INFO][5331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.843 [INFO][5331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:59:10.847966 env[1323]: 2025-08-12 23:59:10.845 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.848444 env[1323]: time="2025-08-12T23:59:10.848001063Z" level=info msg="TearDown network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\" successfully" Aug 12 23:59:10.848444 env[1323]: time="2025-08-12T23:59:10.848037785Z" level=info msg="StopPodSandbox for \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\" returns successfully" Aug 12 23:59:10.848786 env[1323]: time="2025-08-12T23:59:10.848744105Z" level=info msg="RemovePodSandbox for \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\"" Aug 12 23:59:10.848968 env[1323]: time="2025-08-12T23:59:10.848926676Z" level=info msg="Forcibly stopping sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\"" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.888 [WARNING][5349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--h599k-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3f867f91-2ef0-4a2e-b6c3-546b6eb2e2a8", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a9aed0bbcec284b3064d44222c0e38f0dbcf62d8ac0254029d6cf6a4aa5670f", Pod:"goldmane-58fd7646b9-h599k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali580531e3e15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.888 [INFO][5349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.889 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" iface="eth0" netns="" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.889 [INFO][5349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.889 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.910 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.910 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.910 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.920 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.920 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" HandleID="k8s-pod-network.214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Workload="localhost-k8s-goldmane--58fd7646b9--h599k-eth0" Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.921 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:59:10.925494 env[1323]: 2025-08-12 23:59:10.923 [INFO][5349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea" Aug 12 23:59:10.926046 env[1323]: time="2025-08-12T23:59:10.926006911Z" level=info msg="TearDown network for sandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\" successfully" Aug 12 23:59:11.002712 env[1323]: time="2025-08-12T23:59:11.002655601Z" level=info msg="RemovePodSandbox \"214fd06eb112e55f4491b204175da78612a2f9307e4f552a7f46a60d38e494ea\" returns successfully" Aug 12 23:59:11.005505 env[1323]: time="2025-08-12T23:59:11.005461521Z" level=info msg="StopPodSandbox for \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\"" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.050 [WARNING][5376] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzhss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a", Pod:"csi-node-driver-wzhss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31fced89afb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.050 [INFO][5376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.051 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" iface="eth0" netns="" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.051 [INFO][5376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.051 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.087 [INFO][5385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.087 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.087 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.097 [WARNING][5385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.097 [INFO][5385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.099 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.102905 env[1323]: 2025-08-12 23:59:11.100 [INFO][5376] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.103397 env[1323]: time="2025-08-12T23:59:11.102929997Z" level=info msg="TearDown network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\" successfully" Aug 12 23:59:11.103397 env[1323]: time="2025-08-12T23:59:11.102975039Z" level=info msg="StopPodSandbox for \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\" returns successfully" Aug 12 23:59:11.103475 env[1323]: time="2025-08-12T23:59:11.103443746Z" level=info msg="RemovePodSandbox for \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\"" Aug 12 23:59:11.103552 env[1323]: time="2025-08-12T23:59:11.103480948Z" level=info msg="Forcibly stopping sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\"" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.148 [WARNING][5404] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzhss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c7dfd8b-39e8-4cfc-9d3f-39550100c7dc", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85d5ef471a9cd9ee5ee7144dbeed63dc12517011de41bcc7c8a506c26a897b5a", Pod:"csi-node-driver-wzhss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31fced89afb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.148 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.148 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" iface="eth0" netns="" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.148 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.148 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.169 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.169 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.169 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.181 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.181 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" HandleID="k8s-pod-network.ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Workload="localhost-k8s-csi--node--driver--wzhss-eth0" Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.183 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:59:11.187418 env[1323]: 2025-08-12 23:59:11.185 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f" Aug 12 23:59:11.187929 env[1323]: time="2025-08-12T23:59:11.187444534Z" level=info msg="TearDown network for sandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\" successfully" Aug 12 23:59:11.191670 env[1323]: time="2025-08-12T23:59:11.191582170Z" level=info msg="RemovePodSandbox \"ec6bd3cf4ee0601b55241b72d19b95ffc9d8c894ead940a16ffb717a6340833f\" returns successfully" Aug 12 23:59:11.192103 env[1323]: time="2025-08-12T23:59:11.192073518Z" level=info msg="StopPodSandbox for \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\"" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.235 [WARNING][5431] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0", GenerateName:"calico-kube-controllers-645c974fb8-", Namespace:"calico-system", SelfLink:"", UID:"d9c82a29-832e-4f27-bc43-e1ba46fc34e5", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645c974fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1", Pod:"calico-kube-controllers-645c974fb8-zw4bc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1429fdf9466", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.236 [INFO][5431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.236 [INFO][5431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" iface="eth0" netns="" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.236 [INFO][5431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.236 [INFO][5431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.261 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.261 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.261 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.270 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.270 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.272 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.278649 env[1323]: 2025-08-12 23:59:11.274 [INFO][5431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.279101 env[1323]: time="2025-08-12T23:59:11.278619011Z" level=info msg="TearDown network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\" successfully" Aug 12 23:59:11.279101 env[1323]: time="2025-08-12T23:59:11.278670134Z" level=info msg="StopPodSandbox for \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\" returns successfully" Aug 12 23:59:11.279191 env[1323]: time="2025-08-12T23:59:11.279161002Z" level=info msg="RemovePodSandbox for \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\"" Aug 12 23:59:11.279228 env[1323]: time="2025-08-12T23:59:11.279199044Z" level=info msg="Forcibly stopping sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\"" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.319 [WARNING][5459] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0", GenerateName:"calico-kube-controllers-645c974fb8-", Namespace:"calico-system", SelfLink:"", UID:"d9c82a29-832e-4f27-bc43-e1ba46fc34e5", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645c974fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba4466febac2cbf393f469508beb317ba7cdbe17f52f2cac679a606c00b859c1", Pod:"calico-kube-controllers-645c974fb8-zw4bc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1429fdf9466", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.320 [INFO][5459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.320 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" iface="eth0" netns="" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.320 [INFO][5459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.320 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.342 [INFO][5468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.342 [INFO][5468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.342 [INFO][5468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.357 [WARNING][5468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.357 [INFO][5468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" HandleID="k8s-pod-network.55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Workload="localhost-k8s-calico--kube--controllers--645c974fb8--zw4bc-eth0" Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.359 [INFO][5468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.363752 env[1323]: 2025-08-12 23:59:11.361 [INFO][5459] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90" Aug 12 23:59:11.364187 env[1323]: time="2025-08-12T23:59:11.363774545Z" level=info msg="TearDown network for sandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\" successfully" Aug 12 23:59:11.366894 env[1323]: time="2025-08-12T23:59:11.366848760Z" level=info msg="RemovePodSandbox \"55e3de4ce81a2dda6fcb869b91a48937e9682d0bb7e8109c7d9e5cd4bb54bd90\" returns successfully" Aug 12 23:59:11.367456 env[1323]: time="2025-08-12T23:59:11.367411592Z" level=info msg="StopPodSandbox for \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\"" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.400 [WARNING][5486] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2dc283-a072-4194-b3a7-efdf3c371b0b", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3", Pod:"calico-apiserver-54ddd56b5-bk8rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b6998d238a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.401 [INFO][5486] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.401 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" iface="eth0" netns="" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.401 [INFO][5486] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.401 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.422 [INFO][5495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.422 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.422 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.432 [WARNING][5495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.432 [INFO][5495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.434 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.438397 env[1323]: 2025-08-12 23:59:11.436 [INFO][5486] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.438902 env[1323]: time="2025-08-12T23:59:11.438436801Z" level=info msg="TearDown network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\" successfully" Aug 12 23:59:11.438902 env[1323]: time="2025-08-12T23:59:11.438478963Z" level=info msg="StopPodSandbox for \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\" returns successfully" Aug 12 23:59:11.439009 env[1323]: time="2025-08-12T23:59:11.438976872Z" level=info msg="RemovePodSandbox for \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\"" Aug 12 23:59:11.439059 env[1323]: time="2025-08-12T23:59:11.439016074Z" level=info msg="Forcibly stopping sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\"" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.483 [WARNING][5514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2dc283-a072-4194-b3a7-efdf3c371b0b", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa9a5af1aab14cfde7a6a5b31193004706363387f6619804aacbd8eaba2ab3b3", Pod:"calico-apiserver-54ddd56b5-bk8rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b6998d238a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.483 [INFO][5514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.483 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" iface="eth0" netns="" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.483 [INFO][5514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.483 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.506 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.507 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.507 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.519 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.519 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" HandleID="k8s-pod-network.b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Workload="localhost-k8s-calico--apiserver--54ddd56b5--bk8rz-eth0" Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.521 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.526318 env[1323]: 2025-08-12 23:59:11.524 [INFO][5514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316" Aug 12 23:59:11.526955 env[1323]: time="2025-08-12T23:59:11.526349492Z" level=info msg="TearDown network for sandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\" successfully" Aug 12 23:59:11.530199 env[1323]: time="2025-08-12T23:59:11.530028382Z" level=info msg="RemovePodSandbox \"b10143b62de339c72fc6af7721d18c5c8d64d4c8a67ebf302e7d745994ba0316\" returns successfully" Aug 12 23:59:11.530685 env[1323]: time="2025-08-12T23:59:11.530508929Z" level=info msg="StopPodSandbox for \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\"" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.569 [WARNING][5541] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4410b28d-7c64-4d83-b0dc-3486564fba4c", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c", Pod:"calico-apiserver-54ddd56b5-vlgsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0962282477f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.570 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.570 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" iface="eth0" netns="" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.570 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.570 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.595 [INFO][5551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.595 [INFO][5551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.596 [INFO][5551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.605 [WARNING][5551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.606 [INFO][5551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.608 [INFO][5551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.612113 env[1323]: 2025-08-12 23:59:11.610 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.612569 env[1323]: time="2025-08-12T23:59:11.612133622Z" level=info msg="TearDown network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\" successfully" Aug 12 23:59:11.612569 env[1323]: time="2025-08-12T23:59:11.612164623Z" level=info msg="StopPodSandbox for \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\" returns successfully" Aug 12 23:59:11.612885 env[1323]: time="2025-08-12T23:59:11.612853383Z" level=info msg="RemovePodSandbox for \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\"" Aug 12 23:59:11.613011 env[1323]: time="2025-08-12T23:59:11.612972309Z" level=info msg="Forcibly stopping sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\"" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.658 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0", GenerateName:"calico-apiserver-54ddd56b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4410b28d-7c64-4d83-b0dc-3486564fba4c", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54ddd56b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1946a6d1b0f1ca89b9f606fa7b0460b6a5efa33db8b0eccb9bb4380dbe8e3e5c", Pod:"calico-apiserver-54ddd56b5-vlgsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0962282477f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.658 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.658 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" iface="eth0" netns="" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.658 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.658 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.681 [INFO][5578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.681 [INFO][5578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.681 [INFO][5578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.690 [WARNING][5578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.690 [INFO][5578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" HandleID="k8s-pod-network.4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Workload="localhost-k8s-calico--apiserver--54ddd56b5--vlgsf-eth0" Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.692 [INFO][5578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.696977 env[1323]: 2025-08-12 23:59:11.694 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474" Aug 12 23:59:11.697618 env[1323]: time="2025-08-12T23:59:11.697015580Z" level=info msg="TearDown network for sandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\" successfully" Aug 12 23:59:11.726593 env[1323]: time="2025-08-12T23:59:11.726539903Z" level=info msg="RemovePodSandbox \"4cd96a94b9e51c90df6197a3583af4405303c4e083d02e51b3d2f8acab925474\" returns successfully" Aug 12 23:59:11.727107 env[1323]: time="2025-08-12T23:59:11.727077974Z" level=info msg="StopPodSandbox for \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\"" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.780 [WARNING][5595] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" WorkloadEndpoint="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.781 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.781 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" iface="eth0" netns="" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.781 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.781 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.803 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.803 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.803 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.812 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.812 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.814 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.820157 env[1323]: 2025-08-12 23:59:11.816 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.825230 env[1323]: time="2025-08-12T23:59:11.820560902Z" level=info msg="TearDown network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\" successfully" Aug 12 23:59:11.825230 env[1323]: time="2025-08-12T23:59:11.820594064Z" level=info msg="StopPodSandbox for \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\" returns successfully" Aug 12 23:59:11.825230 env[1323]: time="2025-08-12T23:59:11.822136912Z" level=info msg="RemovePodSandbox for \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\"" Aug 12 23:59:11.825230 env[1323]: time="2025-08-12T23:59:11.822184395Z" level=info msg="Forcibly stopping sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\"" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.871 [WARNING][5622] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" 
WorkloadEndpoint="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.871 [INFO][5622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.871 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" iface="eth0" netns="" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.871 [INFO][5622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.871 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.892 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.892 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.892 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.902 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.902 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" HandleID="k8s-pod-network.fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Workload="localhost-k8s-whisker--6f6f6d7688--8r2qj-eth0" Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.904 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:11.908756 env[1323]: 2025-08-12 23:59:11.906 [INFO][5622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2" Aug 12 23:59:11.909310 env[1323]: time="2025-08-12T23:59:11.909264798Z" level=info msg="TearDown network for sandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\" successfully" Aug 12 23:59:11.912999 env[1323]: time="2025-08-12T23:59:11.912949368Z" level=info msg="RemovePodSandbox \"fb8e8d07fd5135ed0437d79bb0260c0d3c04d408017494b1f48272e3fb6379a2\" returns successfully" Aug 12 23:59:11.913645 env[1323]: time="2025-08-12T23:59:11.913604766Z" level=info msg="StopPodSandbox for \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\"" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.972 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e2d94074-07d3-4e8f-bed7-18c1079c94eb", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62", Pod:"coredns-7c65d6cfc9-66lv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72abd4fdbd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.972 [INFO][5650] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.973 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" iface="eth0" netns="" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.973 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.973 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.996 [INFO][5659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.996 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:11.997 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:12.006 [WARNING][5659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:12.006 [INFO][5659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:12.008 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:12.012544 env[1323]: 2025-08-12 23:59:12.010 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.013242 env[1323]: time="2025-08-12T23:59:12.012578841Z" level=info msg="TearDown network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\" successfully" Aug 12 23:59:12.013242 env[1323]: time="2025-08-12T23:59:12.012612443Z" level=info msg="StopPodSandbox for \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\" returns successfully" Aug 12 23:59:12.013450 env[1323]: time="2025-08-12T23:59:12.013419569Z" level=info msg="RemovePodSandbox for \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\"" Aug 12 23:59:12.013720 env[1323]: time="2025-08-12T23:59:12.013669303Z" level=info msg="Forcibly stopping sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\"" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.053 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e2d94074-07d3-4e8f-bed7-18c1079c94eb", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d2985a6dbb0ec6626ebd31e88847d137aad4a4a41306581285aadab4a087b62", Pod:"coredns-7c65d6cfc9-66lv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72abd4fdbd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.053 [INFO][5678] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.053 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" iface="eth0" netns="" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.053 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.053 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.073 [INFO][5687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.074 [INFO][5687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.074 [INFO][5687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.083 [WARNING][5687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.083 [INFO][5687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" HandleID="k8s-pod-network.fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--66lv2-eth0" Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.085 [INFO][5687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:12.089653 env[1323]: 2025-08-12 23:59:12.087 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4" Aug 12 23:59:12.090191 env[1323]: time="2025-08-12T23:59:12.090143623Z" level=info msg="TearDown network for sandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\" successfully" Aug 12 23:59:12.093729 env[1323]: time="2025-08-12T23:59:12.093687463Z" level=info msg="RemovePodSandbox \"fcfbd999bf434926c64ec870878a743d8c0a2a6da204d5526eb0d7dac4a40ad4\" returns successfully" Aug 12 23:59:12.094337 env[1323]: time="2025-08-12T23:59:12.094307578Z" level=info msg="StopPodSandbox for \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\"" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.130 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9dc08e5e-ae34-4c36-9f26-39270357d1c4", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e", Pod:"coredns-7c65d6cfc9-mxcdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63886eadb58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.130 [INFO][5705] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.131 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" iface="eth0" netns="" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.131 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.131 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.163 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.164 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.164 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.173 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.173 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.175 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:12.180399 env[1323]: 2025-08-12 23:59:12.177 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.180399 env[1323]: time="2025-08-12T23:59:12.180167029Z" level=info msg="TearDown network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\" successfully" Aug 12 23:59:12.180399 env[1323]: time="2025-08-12T23:59:12.180200911Z" level=info msg="StopPodSandbox for \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\" returns successfully" Aug 12 23:59:12.181264 env[1323]: time="2025-08-12T23:59:12.181219128Z" level=info msg="RemovePodSandbox for \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\"" Aug 12 23:59:12.181417 env[1323]: time="2025-08-12T23:59:12.181375217Z" level=info msg="Forcibly stopping sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\"" Aug 12 23:59:12.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.49:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:59:12.254510 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:34538.service. Aug 12 23:59:12.255822 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 12 23:59:12.255899 kernel: audit: type=1130 audit(1755043152.254:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.49:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.222 [WARNING][5732] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9dc08e5e-ae34-4c36-9f26-39270357d1c4", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"107d5d68d782f336105e5928fc628c4fee74617610dafc1958ce238770f6a83e", Pod:"coredns-7c65d6cfc9-mxcdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali63886eadb58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.222 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.223 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" iface="eth0" netns="" Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.223 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.223 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.243 [INFO][5741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.244 [INFO][5741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.244 [INFO][5741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.255 [WARNING][5741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.255 [INFO][5741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" HandleID="k8s-pod-network.3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Workload="localhost-k8s-coredns--7c65d6cfc9--mxcdd-eth0" Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.258 [INFO][5741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:12.270369 env[1323]: 2025-08-12 23:59:12.262 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134" Aug 12 23:59:12.270836 env[1323]: time="2025-08-12T23:59:12.270401446Z" level=info msg="TearDown network for sandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\" successfully" Aug 12 23:59:12.273744 env[1323]: time="2025-08-12T23:59:12.273704273Z" level=info msg="RemovePodSandbox \"3a806e5c207832a13b581814d042d0d550a196e48bebc93814bb138b1bd65134\" returns successfully" Aug 12 23:59:12.302000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.303820 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:12.304821 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:12.303000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.309028 kernel: audit: type=1101 audit(1755043152.302:481): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.309115 kernel: audit: type=1103 audit(1755043152.303:482): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.309204 kernel: audit: type=1006 
audit(1755043152.303:483): pid=5748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Aug 12 23:59:12.303000 audit[5748]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4a60b70 a2=3 a3=1 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:12.311731 systemd-logind[1309]: New session 14 of user core. Aug 12 23:59:12.312576 systemd[1]: Started session-14.scope. Aug 12 23:59:12.313585 kernel: audit: type=1300 audit(1755043152.303:483): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4a60b70 a2=3 a3=1 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:12.313648 kernel: audit: type=1327 audit(1755043152.303:483): proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:12.303000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:12.316000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.317000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.321744 kernel: audit: type=1105 audit(1755043152.316:484): pid=5748 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.321829 kernel: audit: type=1103 audit(1755043152.317:485): pid=5751 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.496653 sshd[5748]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:12.497000 audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.500449 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:34538.service: Deactivated successfully. Aug 12 23:59:12.497000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.501507 systemd-logind[1309]: Session 14 logged out. Waiting for processes to exit. Aug 12 23:59:12.501575 systemd[1]: session-14.scope: Deactivated successfully. Aug 12 23:59:12.502472 systemd-logind[1309]: Removed session 14. 
Aug 12 23:59:12.503127 kernel: audit: type=1106 audit(1755043152.497:486): pid=5748 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.503211 kernel: audit: type=1104 audit(1755043152.497:487): pid=5748 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:12.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.49:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:14.045200 kubelet[2127]: I0812 23:59:14.044231 2127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:59:14.080154 kubelet[2127]: I0812 23:59:14.078726 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54ddd56b5-vlgsf" podStartSLOduration=40.475303857 podStartE2EDuration="49.078709096s" podCreationTimestamp="2025-08-12 23:58:25 +0000 UTC" firstStartedPulling="2025-08-12 23:58:52.31058106 +0000 UTC m=+41.708085811" lastFinishedPulling="2025-08-12 23:59:00.913986299 +0000 UTC m=+50.311491050" observedRunningTime="2025-08-12 23:59:01.948883812 +0000 UTC m=+51.346388563" watchObservedRunningTime="2025-08-12 23:59:14.078709096 +0000 UTC m=+63.476213847" Aug 12 23:59:14.095000 audit[5792]: NETFILTER_CFG table=filter:123 family=2 entries=8 op=nft_register_rule pid=5792 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:14.095000 audit[5792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff59fb5c0 a2=0 a3=1 items=0 ppid=2235 pid=5792 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:14.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:14.105000 audit[5792]: NETFILTER_CFG table=nat:124 family=2 entries=38 op=nft_register_chain pid=5792 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:14.105000 audit[5792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=fffff59fb5c0 a2=0 a3=1 items=0 ppid=2235 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:14.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:17.502100 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:40752.service. Aug 12 23:59:17.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.49:22-10.0.0.1:40752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:17.503013 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 12 23:59:17.503053 kernel: audit: type=1130 audit(1755043157.501:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.49:22-10.0.0.1:40752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:59:17.551000 audit[5795]: USER_ACCT pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.552674 sshd[5795]: Accepted publickey for core from 10.0.0.1 port 40752 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:17.555695 kernel: audit: type=1101 audit(1755043157.551:492): pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.556000 audit[5795]: CRED_ACQ pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.557271 sshd[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:17.561186 kernel: audit: type=1103 audit(1755043157.556:493): pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.561270 kernel: audit: type=1006 audit(1755043157.556:494): pid=5795 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Aug 12 23:59:17.561310 kernel: audit: type=1300 audit(1755043157.556:494): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe5aaaf0 a2=3 a3=1 items=0 ppid=1 pid=5795 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 12 23:59:17.556000 audit[5795]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe5aaaf0 a2=3 a3=1 items=0 ppid=1 pid=5795 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:17.562276 systemd-logind[1309]: New session 15 of user core. Aug 12 23:59:17.563170 systemd[1]: Started session-15.scope. Aug 12 23:59:17.556000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:17.565612 kernel: audit: type=1327 audit(1755043157.556:494): proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:17.571000 audit[5795]: USER_START pid=5795 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.574000 audit[5798]: CRED_ACQ pid=5798 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.578444 kernel: audit: type=1105 audit(1755043157.571:495): pid=5795 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.578565 kernel: audit: type=1103 audit(1755043157.574:496): pid=5798 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.705324 kubelet[2127]: E0812 23:59:17.705247 2127 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:17.764294 sshd[5795]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:17.766000 audit[5795]: USER_END pid=5795 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.770234 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:40752.service: Deactivated successfully. Aug 12 23:59:17.766000 audit[5795]: CRED_DISP pid=5795 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.774459 systemd-logind[1309]: Session 15 logged out. Waiting for processes to exit. Aug 12 23:59:17.774829 systemd[1]: session-15.scope: Deactivated successfully. Aug 12 23:59:17.775243 kernel: audit: type=1106 audit(1755043157.766:497): pid=5795 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.775529 kernel: audit: type=1104 audit(1755043157.766:498): pid=5795 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:17.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.49:22-10.0.0.1:40752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 12 23:59:17.782901 systemd-logind[1309]: Removed session 15. Aug 12 23:59:22.768707 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:44378.service. Aug 12 23:59:22.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.49:22-10.0.0.1:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:22.769804 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 12 23:59:22.769883 kernel: audit: type=1130 audit(1755043162.767:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.49:22-10.0.0.1:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:22.820000 audit[5809]: USER_ACCT pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:22.822154 sshd[5809]: Accepted publickey for core from 10.0.0.1 port 44378 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:22.824658 kernel: audit: type=1101 audit(1755043162.820:501): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:22.823000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:22.828347 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:22.829271 kernel: audit: type=1103 
audit(1755043162.823:502): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:22.829352 kernel: audit: type=1006 audit(1755043162.823:503): pid=5809 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 12 23:59:22.823000 audit[5809]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2713180 a2=3 a3=1 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:22.832192 kernel: audit: type=1300 audit(1755043162.823:503): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2713180 a2=3 a3=1 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:22.832270 kernel: audit: type=1327 audit(1755043162.823:503): proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:22.823000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:22.835809 systemd-logind[1309]: New session 16 of user core. Aug 12 23:59:22.836660 systemd[1]: Started session-16.scope. 
Aug 12 23:59:22.840000 audit[5809]: USER_START pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:22.845657 kernel: audit: type=1105 audit(1755043162.840:504): pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:22.844000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:22.848700 kernel: audit: type=1103 audit(1755043162.844:505): pid=5812 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.142193 sshd[5809]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:23.145060 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:44386.service. 
Aug 12 23:59:23.142000 audit[5809]: USER_END pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.142000 audit[5809]: CRED_DISP pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.150318 kernel: audit: type=1106 audit(1755043163.142:506): pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.150439 kernel: audit: type=1104 audit(1755043163.142:507): pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.150780 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:44378.service: Deactivated successfully. Aug 12 23:59:23.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.49:22-10.0.0.1:44386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:23.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.49:22-10.0.0.1:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:23.151964 systemd[1]: session-16.scope: Deactivated successfully. 
Aug 12 23:59:23.152085 systemd-logind[1309]: Session 16 logged out. Waiting for processes to exit. Aug 12 23:59:23.153024 systemd-logind[1309]: Removed session 16. Aug 12 23:59:23.189000 audit[5821]: USER_ACCT pid=5821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.191402 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 44386 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:23.191000 audit[5821]: CRED_ACQ pid=5821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.191000 audit[5821]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0147220 a2=3 a3=1 items=0 ppid=1 pid=5821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:23.191000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:23.193416 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:23.202740 systemd-logind[1309]: New session 17 of user core. Aug 12 23:59:23.203489 systemd[1]: Started session-17.scope. 
Aug 12 23:59:23.208000 audit[5821]: USER_START pid=5821 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.210000 audit[5826]: CRED_ACQ pid=5826 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.471373 sshd[5821]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:23.474898 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:44400.service. Aug 12 23:59:23.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.49:22-10.0.0.1:44400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:23.474000 audit[5821]: USER_END pid=5821 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.474000 audit[5821]: CRED_DISP pid=5821 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.478795 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:44386.service: Deactivated successfully. Aug 12 23:59:23.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.49:22-10.0.0.1:44386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:59:23.480376 systemd[1]: session-17.scope: Deactivated successfully. Aug 12 23:59:23.480797 systemd-logind[1309]: Session 17 logged out. Waiting for processes to exit. Aug 12 23:59:23.481595 systemd-logind[1309]: Removed session 17. Aug 12 23:59:23.529000 audit[5834]: USER_ACCT pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.531784 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 44400 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:59:23.531000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.531000 audit[5834]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdfcfd970 a2=3 a3=1 items=0 ppid=1 pid=5834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:23.531000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 12 23:59:23.533588 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:59:23.538464 systemd[1]: Started session-18.scope. Aug 12 23:59:23.538678 systemd-logind[1309]: New session 18 of user core. 
Aug 12 23:59:23.541000 audit[5834]: USER_START pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:23.543000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:25.302000 audit[5874]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:25.302000 audit[5874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=fffff1455120 a2=0 a3=1 items=0 ppid=2235 pid=5874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:25.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:25.308833 sshd[5834]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:25.308000 audit[5834]: USER_END pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:25.308000 audit[5834]: CRED_DISP pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 12 23:59:25.311467 systemd[1]: Started 
sshd@18-10.0.0.49:22-10.0.0.1:44406.service. Aug 12 23:59:25.312051 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:44400.service: Deactivated successfully. Aug 12 23:59:25.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.49:22-10.0.0.1:44406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:25.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.49:22-10.0.0.1:44400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:59:25.313246 systemd[1]: session-18.scope: Deactivated successfully. Aug 12 23:59:25.314051 systemd-logind[1309]: Session 18 logged out. Waiting for processes to exit. Aug 12 23:59:25.316000 audit[5874]: NETFILTER_CFG table=nat:126 family=2 entries=26 op=nft_register_rule pid=5874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 12 23:59:25.316000 audit[5874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=fffff1455120 a2=0 a3=1 items=0 ppid=2235 pid=5874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:59:25.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 12 23:59:25.319455 systemd-logind[1309]: Removed session 18. 
Aug 12 23:59:25.342000 audit[5880]: NETFILTER_CFG table=filter:127 family=2 entries=32 op=nft_register_rule pid=5880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:59:25.342000 audit[5880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe4646b20 a2=0 a3=1 items=0 ppid=2235 pid=5880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:25.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:59:25.356000 audit[5880]: NETFILTER_CFG table=nat:128 family=2 entries=26 op=nft_register_rule pid=5880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:59:25.356000 audit[5880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe4646b20 a2=0 a3=1 items=0 ppid=2235 pid=5880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:25.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:59:25.358000 audit[5875]: USER_ACCT pid=5875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.360291 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 44406 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:59:25.360000 audit[5875]: CRED_ACQ pid=5875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.360000 audit[5875]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdabf6c90 a2=3 a3=1 items=0 ppid=1 pid=5875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:25.360000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:25.362414 sshd[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:59:25.366806 systemd-logind[1309]: New session 19 of user core.
Aug 12 23:59:25.367232 systemd[1]: Started session-19.scope.
Aug 12 23:59:25.369000 audit[5875]: USER_START pid=5875 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.371000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.825649 sshd[5875]: pam_unix(sshd:session): session closed for user core
Aug 12 23:59:25.827154 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:44408.service.
Aug 12 23:59:25.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.49:22-10.0.0.1:44408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:25.826000 audit[5875]: USER_END pid=5875 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.827000 audit[5875]: CRED_DISP pid=5875 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.830618 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:44406.service: Deactivated successfully.
Aug 12 23:59:25.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.49:22-10.0.0.1:44406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:25.831771 systemd-logind[1309]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:59:25.831823 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:59:25.833068 systemd-logind[1309]: Removed session 19.
Aug 12 23:59:25.880000 audit[5890]: USER_ACCT pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.882029 sshd[5890]: Accepted publickey for core from 10.0.0.1 port 44408 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:59:25.881000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.881000 audit[5890]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6739980 a2=3 a3=1 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:25.881000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:25.883382 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:59:25.887212 systemd-logind[1309]: New session 20 of user core.
Aug 12 23:59:25.888067 systemd[1]: Started session-20.scope.
Aug 12 23:59:25.891000 audit[5890]: USER_START pid=5890 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:25.893000 audit[5895]: CRED_ACQ pid=5895 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:26.030841 sshd[5890]: pam_unix(sshd:session): session closed for user core
Aug 12 23:59:26.030000 audit[5890]: USER_END pid=5890 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:26.030000 audit[5890]: CRED_DISP pid=5890 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:26.034051 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:44408.service: Deactivated successfully.
Aug 12 23:59:26.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.49:22-10.0.0.1:44408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:26.035016 systemd-logind[1309]: Session 20 logged out. Waiting for processes to exit.
Aug 12 23:59:26.035079 systemd[1]: session-20.scope: Deactivated successfully.
Aug 12 23:59:26.036238 systemd-logind[1309]: Removed session 20.
Aug 12 23:59:26.706304 kubelet[2127]: E0812 23:59:26.706263 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:28.586683 systemd[1]: run-containerd-runc-k8s.io-cbcb7c58eb7f78ed8bea814a4f05e90ff612bac1a2b27245de061a2d3ceed4ef-runc.OOguka.mount: Deactivated successfully.
Aug 12 23:59:28.706326 kubelet[2127]: E0812 23:59:28.706280 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:30.940241 kernel: kauditd_printk_skb: 57 callbacks suppressed
Aug 12 23:59:30.940390 kernel: audit: type=1325 audit(1755043170.935:549): table=filter:129 family=2 entries=20 op=nft_register_rule pid=5929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:59:30.935000 audit[5929]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:59:30.935000 audit[5929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffeebc77e0 a2=0 a3=1 items=0 ppid=2235 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:30.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:59:30.946334 kernel: audit: type=1300 audit(1755043170.935:549): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffeebc77e0 a2=0 a3=1 items=0 ppid=2235 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:30.946428 kernel: audit: type=1327 audit(1755043170.935:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:59:30.948000 audit[5929]: NETFILTER_CFG table=nat:130 family=2 entries=110 op=nft_register_chain pid=5929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:59:30.948000 audit[5929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffeebc77e0 a2=0 a3=1 items=0 ppid=2235 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:30.956042 kernel: audit: type=1325 audit(1755043170.948:550): table=nat:130 family=2 entries=110 op=nft_register_chain pid=5929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 12 23:59:30.956135 kernel: audit: type=1300 audit(1755043170.948:550): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffeebc77e0 a2=0 a3=1 items=0 ppid=2235 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:30.956156 kernel: audit: type=1327 audit(1755043170.948:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:59:30.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 12 23:59:31.033853 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:44424.service.
Aug 12 23:59:31.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.49:22-10.0.0.1:44424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:31.036657 kernel: audit: type=1130 audit(1755043171.032:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.49:22-10.0.0.1:44424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:31.080000 audit[5931]: USER_ACCT pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.082701 sshd[5931]: Accepted publickey for core from 10.0.0.1 port 44424 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:59:31.085647 kernel: audit: type=1101 audit(1755043171.080:552): pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.084000 audit[5931]: CRED_ACQ pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.086952 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:59:31.091223 kernel: audit: type=1103 audit(1755043171.084:553): pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.091317 kernel: audit: type=1006 audit(1755043171.084:554): pid=5931 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Aug 12 23:59:31.095801 systemd[1]: Started session-21.scope.
Aug 12 23:59:31.096275 systemd-logind[1309]: New session 21 of user core.
Aug 12 23:59:31.084000 audit[5931]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc09c6d60 a2=3 a3=1 items=0 ppid=1 pid=5931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:31.084000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:31.100000 audit[5931]: USER_START pid=5931 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.102000 audit[5934]: CRED_ACQ pid=5934 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.252174 sshd[5931]: pam_unix(sshd:session): session closed for user core
Aug 12 23:59:31.251000 audit[5931]: USER_END pid=5931 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.252000 audit[5931]: CRED_DISP pid=5931 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:31.256736 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:44424.service: Deactivated successfully.
Aug 12 23:59:31.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.49:22-10.0.0.1:44424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:31.258184 systemd[1]: session-21.scope: Deactivated successfully.
Aug 12 23:59:31.258790 systemd-logind[1309]: Session 21 logged out. Waiting for processes to exit.
Aug 12 23:59:31.259764 systemd-logind[1309]: Removed session 21.
Aug 12 23:59:33.706726 kubelet[2127]: E0812 23:59:33.706690 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:36.257307 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:58446.service.
Aug 12 23:59:36.260771 kernel: kauditd_printk_skb: 7 callbacks suppressed
Aug 12 23:59:36.260876 kernel: audit: type=1130 audit(1755043176.256:560): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.49:22-10.0.0.1:58446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:36.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.49:22-10.0.0.1:58446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:36.328275 kernel: audit: type=1101 audit(1755043176.322:561): pid=5953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.328373 kernel: audit: type=1103 audit(1755043176.323:562): pid=5953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.330303 kernel: audit: type=1006 audit(1755043176.323:563): pid=5953 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Aug 12 23:59:36.322000 audit[5953]: USER_ACCT pid=5953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.323000 audit[5953]: CRED_ACQ pid=5953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.330465 sshd[5953]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:59:36.324840 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:59:36.330048 systemd[1]: Started session-22.scope.
Aug 12 23:59:36.331054 systemd-logind[1309]: New session 22 of user core.
Aug 12 23:59:36.334137 kernel: audit: type=1300 audit(1755043176.323:563): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2e91900 a2=3 a3=1 items=0 ppid=1 pid=5953 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:36.323000 audit[5953]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2e91900 a2=3 a3=1 items=0 ppid=1 pid=5953 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:36.323000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:36.337731 kernel: audit: type=1327 audit(1755043176.323:563): proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:36.335000 audit[5953]: USER_START pid=5953 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.341089 kernel: audit: type=1105 audit(1755043176.335:564): pid=5953 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.338000 audit[5956]: CRED_ACQ pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.349262 kernel: audit: type=1103 audit(1755043176.338:565): pid=5956 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.472126 sshd[5953]: pam_unix(sshd:session): session closed for user core
Aug 12 23:59:36.478607 kernel: audit: type=1106 audit(1755043176.472:566): pid=5953 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.478710 kernel: audit: type=1104 audit(1755043176.472:567): pid=5953 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.472000 audit[5953]: USER_END pid=5953 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.472000 audit[5953]: CRED_DISP pid=5953 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:36.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.49:22-10.0.0.1:58446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:36.474985 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:58446.service: Deactivated successfully.
Aug 12 23:59:36.476069 systemd-logind[1309]: Session 22 logged out. Waiting for processes to exit.
Aug 12 23:59:36.476116 systemd[1]: session-22.scope: Deactivated successfully.
Aug 12 23:59:36.476880 systemd-logind[1309]: Removed session 22.
Aug 12 23:59:41.476936 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:58448.service.
Aug 12 23:59:41.480697 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 12 23:59:41.480894 kernel: audit: type=1130 audit(1755043181.476:569): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.49:22-10.0.0.1:58448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:41.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.49:22-10.0.0.1:58448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:41.524000 audit[5991]: USER_ACCT pid=5991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.527599 sshd[5991]: Accepted publickey for core from 10.0.0.1 port 58448 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:59:41.527903 kernel: audit: type=1101 audit(1755043181.524:570): pid=5991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.533264 kernel: audit: type=1103 audit(1755043181.528:571): pid=5991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.533343 kernel: audit: type=1006 audit(1755043181.528:572): pid=5991 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Aug 12 23:59:41.528000 audit[5991]: CRED_ACQ pid=5991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.529502 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:59:41.536830 kernel: audit: type=1300 audit(1755043181.528:572): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8d064b0 a2=3 a3=1 items=0 ppid=1 pid=5991 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:41.528000 audit[5991]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8d064b0 a2=3 a3=1 items=0 ppid=1 pid=5991 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:41.534373 systemd[1]: Started session-23.scope.
Aug 12 23:59:41.534740 systemd-logind[1309]: New session 23 of user core.
Aug 12 23:59:41.528000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:41.538670 kernel: audit: type=1327 audit(1755043181.528:572): proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:41.541000 audit[5991]: USER_START pid=5991 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.543000 audit[5994]: CRED_ACQ pid=5994 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.549853 kernel: audit: type=1105 audit(1755043181.541:573): pid=5991 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.554654 kernel: audit: type=1103 audit(1755043181.543:574): pid=5994 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.724412 sshd[5991]: pam_unix(sshd:session): session closed for user core
Aug 12 23:59:41.725000 audit[5991]: USER_END pid=5991 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.731246 kernel: audit: type=1106 audit(1755043181.725:575): pid=5991 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.731281 kernel: audit: type=1104 audit(1755043181.725:576): pid=5991 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.725000 audit[5991]: CRED_DISP pid=5991 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:41.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.49:22-10.0.0.1:58448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:41.728360 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:58448.service: Deactivated successfully.
Aug 12 23:59:41.729388 systemd[1]: session-23.scope: Deactivated successfully.
Aug 12 23:59:41.731694 systemd-logind[1309]: Session 23 logged out. Waiting for processes to exit.
Aug 12 23:59:41.732794 systemd-logind[1309]: Removed session 23.
Aug 12 23:59:46.727838 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:39136.service.
Aug 12 23:59:46.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.49:22-10.0.0.1:39136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:46.731957 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 12 23:59:46.732079 kernel: audit: type=1130 audit(1755043186.727:578): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.49:22-10.0.0.1:39136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:46.773000 audit[6028]: USER_ACCT pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.777208 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:59:46.777677 kernel: audit: type=1101 audit(1755043186.773:579): pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.779000 audit[6028]: CRED_ACQ pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.782649 kernel: audit: type=1103 audit(1755043186.779:580): pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.782723 kernel: audit: type=1006 audit(1755043186.781:581): pid=6028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Aug 12 23:59:46.786159 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:59:46.781000 audit[6028]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea67b800 a2=3 a3=1 items=0 ppid=1 pid=6028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:46.781000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:46.793974 kernel: audit: type=1300 audit(1755043186.781:581): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea67b800 a2=3 a3=1 items=0 ppid=1 pid=6028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 12 23:59:46.794027 kernel: audit: type=1327 audit(1755043186.781:581): proctitle=737368643A20636F7265205B707269765D
Aug 12 23:59:46.801157 systemd-logind[1309]: New session 24 of user core.
Aug 12 23:59:46.803783 systemd[1]: Started session-24.scope.
Aug 12 23:59:46.816000 audit[6028]: USER_START pid=6028 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.817000 audit[6031]: CRED_ACQ pid=6031 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.829269 kernel: audit: type=1105 audit(1755043186.816:582): pid=6028 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.829418 kernel: audit: type=1103 audit(1755043186.817:583): pid=6031 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.959142 sshd[6028]: pam_unix(sshd:session): session closed for user core
Aug 12 23:59:46.959000 audit[6028]: USER_END pid=6028 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.963656 kernel: audit: type=1106 audit(1755043186.959:584): pid=6028 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.963000 audit[6028]: CRED_DISP pid=6028 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.966683 kernel: audit: type=1104 audit(1755043186.963:585): pid=6028 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Aug 12 23:59:46.966797 systemd-logind[1309]: Session 24 logged out. Waiting for processes to exit.
Aug 12 23:59:46.967318 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:39136.service: Deactivated successfully.
Aug 12 23:59:46.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.49:22-10.0.0.1:39136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:59:46.968349 systemd[1]: session-24.scope: Deactivated successfully.
Aug 12 23:59:46.969599 systemd-logind[1309]: Removed session 24.