Sep 9 00:42:27.700365 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:42:27.700384 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Sep 8 23:23:23 -00 2025
Sep 9 00:42:27.700392 kernel: efi: EFI v2.70 by EDK II
Sep 9 00:42:27.700398 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 9 00:42:27.700403 kernel: random: crng init done
Sep 9 00:42:27.700408 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:42:27.700414 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 9 00:42:27.700421 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:42:27.700427 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700432 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700437 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700443 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700448 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700453 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700461 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700467 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700473 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:42:27.700478 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:42:27.700484 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:42:27.700489 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:42:27.700495 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Sep 9 00:42:27.700501 kernel: Zone ranges:
Sep 9 00:42:27.700506 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:42:27.700513 kernel: DMA32 empty
Sep 9 00:42:27.700518 kernel: Normal empty
Sep 9 00:42:27.700524 kernel: Movable zone start for each node
Sep 9 00:42:27.700529 kernel: Early memory node ranges
Sep 9 00:42:27.700535 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 9 00:42:27.700541 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 9 00:42:27.700546 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 9 00:42:27.700552 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 9 00:42:27.700558 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 9 00:42:27.700563 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 9 00:42:27.700569 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 9 00:42:27.700574 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:42:27.700581 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:42:27.700587 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:42:27.700592 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:42:27.700598 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:42:27.700604 kernel: psci: Trusted OS migration not required
Sep 9 00:42:27.700611 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:42:27.700618 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:42:27.700625 kernel: ACPI: SRAT not present
Sep 9 00:42:27.700631 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 9 00:42:27.700637 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 9 00:42:27.700644 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:42:27.700649 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:42:27.700656 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:42:27.700662 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:42:27.700668 kernel: CPU features: detected: Spectre-v4
Sep 9 00:42:27.700674 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:42:27.700681 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:42:27.700687 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:42:27.700693 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:42:27.700708 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:42:27.700714 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:42:27.700720 kernel: Policy zone: DMA
Sep 9 00:42:27.700727 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:42:27.700734 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:42:27.700740 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:42:27.700746 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:42:27.700752 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:42:27.700760 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114952K reserved, 0K cma-reserved)
Sep 9 00:42:27.700766 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:42:27.700772 kernel: trace event string verifier disabled
Sep 9 00:42:27.700778 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:42:27.700785 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:42:27.700791 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:42:27.700797 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:42:27.700804 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:42:27.700810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:42:27.700816 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:42:27.700822 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:42:27.700829 kernel: GICv3: 256 SPIs implemented
Sep 9 00:42:27.700835 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:42:27.700841 kernel: GICv3: Distributor has no Range Selector support
Sep 9 00:42:27.700847 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:42:27.700853 kernel: GICv3: 16 PPIs implemented
Sep 9 00:42:27.700859 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:42:27.700865 kernel: ACPI: SRAT not present
Sep 9 00:42:27.700871 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:42:27.700877 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:42:27.700883 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:42:27.700890 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 9 00:42:27.700896 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 9 00:42:27.700903 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:42:27.700909 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:42:27.700915 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:42:27.700921 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:42:27.700927 kernel: arm-pv: using stolen time PV
Sep 9 00:42:27.700934 kernel: Console: colour dummy device 80x25
Sep 9 00:42:27.700940 kernel: ACPI: Core revision 20210730
Sep 9 00:42:27.700946 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:42:27.700953 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:42:27.700959 kernel: LSM: Security Framework initializing
Sep 9 00:42:27.700966 kernel: SELinux: Initializing.
Sep 9 00:42:27.700972 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:42:27.700988 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:42:27.700994 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:42:27.701001 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:42:27.701007 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:42:27.701013 kernel: Remapping and enabling EFI services.
Sep 9 00:42:27.701019 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:42:27.701025 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:42:27.701033 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:42:27.701039 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 9 00:42:27.701046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:42:27.701052 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:42:27.701058 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:42:27.701065 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:42:27.701071 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 9 00:42:27.701077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:42:27.701083 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:42:27.701090 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:42:27.701097 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:42:27.701103 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 9 00:42:27.701109 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:42:27.701116 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:42:27.701125 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:42:27.701133 kernel: SMP: Total of 4 processors activated.
Sep 9 00:42:27.701139 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:42:27.701146 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:42:27.701152 kernel: CPU features: detected: Common not Private translations
Sep 9 00:42:27.701159 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:42:27.701166 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:42:27.701172 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:42:27.701179 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:42:27.701186 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:42:27.701193 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:42:27.701199 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:42:27.701205 kernel: alternatives: patching kernel code
Sep 9 00:42:27.701213 kernel: devtmpfs: initialized
Sep 9 00:42:27.701219 kernel: KASLR enabled
Sep 9 00:42:27.701226 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:42:27.701232 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:42:27.701239 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:42:27.701245 kernel: SMBIOS 3.0.0 present.
Sep 9 00:42:27.701252 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 9 00:42:27.701258 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:42:27.701265 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:42:27.701273 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:42:27.701280 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:42:27.701286 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:42:27.701293 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Sep 9 00:42:27.701300 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:42:27.701306 kernel: cpuidle: using governor menu
Sep 9 00:42:27.701312 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:42:27.701319 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:42:27.701325 kernel: ACPI: bus type PCI registered
Sep 9 00:42:27.701333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:42:27.701339 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:42:27.701346 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:42:27.701352 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:42:27.701359 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:42:27.701365 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:42:27.701372 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:42:27.701379 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:42:27.701385 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:42:27.701393 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:42:27.701400 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:42:27.701406 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 9 00:42:27.701412 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 9 00:42:27.701419 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 9 00:42:27.701426 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:42:27.701432 kernel: ACPI: Interpreter enabled
Sep 9 00:42:27.701439 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:42:27.701445 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:42:27.701453 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:42:27.701459 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:42:27.701466 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:42:27.701579 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:42:27.701642 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:42:27.701710 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:42:27.701770 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:42:27.701829 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:42:27.701838 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:42:27.701844 kernel: PCI host bridge to bus 0000:00
Sep 9 00:42:27.701908 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:42:27.701960 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:42:27.702150 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:42:27.702221 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:42:27.702303 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:42:27.702377 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:42:27.702437 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:42:27.702497 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:42:27.702556 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:42:27.702614 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:42:27.702673 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:42:27.702751 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:42:27.702806 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:42:27.702860 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:42:27.702912 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:42:27.702920 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:42:27.702927 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:42:27.702934 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:42:27.702941 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:42:27.702950 kernel: iommu: Default domain type: Translated
Sep 9 00:42:27.702957 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:42:27.702963 kernel: vgaarb: loaded
Sep 9 00:42:27.702970 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 00:42:27.702990 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 00:42:27.702997 kernel: PTP clock support registered
Sep 9 00:42:27.703004 kernel: Registered efivars operations
Sep 9 00:42:27.703010 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:42:27.703017 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:42:27.703025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:42:27.703032 kernel: pnp: PnP ACPI init
Sep 9 00:42:27.703098 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:42:27.703108 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:42:27.703114 kernel: NET: Registered PF_INET protocol family
Sep 9 00:42:27.703121 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:42:27.703128 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:42:27.703135 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:42:27.703143 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:42:27.703150 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 9 00:42:27.703156 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:42:27.703163 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:42:27.703169 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:42:27.703176 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:42:27.703183 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:42:27.703189 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:42:27.703196 kernel: kvm [1]: HYP mode not available
Sep 9 00:42:27.703204 kernel: Initialise system trusted keyrings
Sep 9 00:42:27.703211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:42:27.703217 kernel: Key type asymmetric registered
Sep 9 00:42:27.703224 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:42:27.703231 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 00:42:27.703237 kernel: io scheduler mq-deadline registered
Sep 9 00:42:27.703244 kernel: io scheduler kyber registered
Sep 9 00:42:27.703251 kernel: io scheduler bfq registered
Sep 9 00:42:27.703257 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:42:27.703265 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:42:27.703272 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:42:27.703331 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:42:27.703341 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:42:27.703347 kernel: thunder_xcv, ver 1.0
Sep 9 00:42:27.703354 kernel: thunder_bgx, ver 1.0
Sep 9 00:42:27.703360 kernel: nicpf, ver 1.0
Sep 9 00:42:27.703367 kernel: nicvf, ver 1.0
Sep 9 00:42:27.703431 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:42:27.703488 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:42:27 UTC (1757378547)
Sep 9 00:42:27.703496 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:42:27.703503 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:42:27.703510 kernel: Segment Routing with IPv6
Sep 9 00:42:27.703516 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:42:27.703523 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:42:27.703529 kernel: Key type dns_resolver registered
Sep 9 00:42:27.703536 kernel: registered taskstats version 1
Sep 9 00:42:27.703543 kernel: Loading compiled-in X.509 certificates
Sep 9 00:42:27.703550 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 14b3f28443a1a4b809c7c0337ab8c3dc8fdb5252'
Sep 9 00:42:27.703557 kernel: Key type .fscrypt registered
Sep 9 00:42:27.703563 kernel: Key type fscrypt-provisioning registered
Sep 9 00:42:27.703570 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:42:27.703576 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:42:27.703583 kernel: ima: No architecture policies found
Sep 9 00:42:27.703589 kernel: clk: Disabling unused clocks
Sep 9 00:42:27.703596 kernel: Freeing unused kernel memory: 36416K
Sep 9 00:42:27.703604 kernel: Run /init as init process
Sep 9 00:42:27.703610 kernel: with arguments:
Sep 9 00:42:27.703617 kernel: /init
Sep 9 00:42:27.703623 kernel: with environment:
Sep 9 00:42:27.703629 kernel: HOME=/
Sep 9 00:42:27.703635 kernel: TERM=linux
Sep 9 00:42:27.703642 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:42:27.703651 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:42:27.703660 systemd[1]: Detected virtualization kvm.
Sep 9 00:42:27.703668 systemd[1]: Detected architecture arm64.
Sep 9 00:42:27.703674 systemd[1]: Running in initrd.
Sep 9 00:42:27.703682 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:42:27.703689 systemd[1]: Hostname set to .
Sep 9 00:42:27.703704 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:42:27.703712 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:42:27.703719 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:42:27.703727 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:42:27.703734 systemd[1]: Reached target paths.target.
Sep 9 00:42:27.703741 systemd[1]: Reached target slices.target.
Sep 9 00:42:27.703748 systemd[1]: Reached target swap.target.
Sep 9 00:42:27.703755 systemd[1]: Reached target timers.target.
Sep 9 00:42:27.703762 systemd[1]: Listening on iscsid.socket.
Sep 9 00:42:27.703769 systemd[1]: Listening on iscsiuio.socket.
Sep 9 00:42:27.703778 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 9 00:42:27.703785 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 9 00:42:27.703793 systemd[1]: Listening on systemd-journald.socket.
Sep 9 00:42:27.703799 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:42:27.703806 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:42:27.703814 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:42:27.703820 systemd[1]: Reached target sockets.target.
Sep 9 00:42:27.703827 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:42:27.703835 systemd[1]: Finished network-cleanup.service.
Sep 9 00:42:27.703843 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:42:27.703850 systemd[1]: Starting systemd-journald.service...
Sep 9 00:42:27.703857 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:42:27.703864 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:42:27.703871 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 9 00:42:27.703878 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:42:27.703885 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:42:27.703892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 9 00:42:27.703899 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 9 00:42:27.703907 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 9 00:42:27.703915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 9 00:42:27.703924 systemd-journald[290]: Journal started
Sep 9 00:42:27.703964 systemd-journald[290]: Runtime Journal (/run/log/journal/0d4cdec9609040b68733c930e52cd975) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:42:27.676865 systemd-modules-load[291]: Inserted module 'overlay'
Sep 9 00:42:27.706452 systemd-resolved[292]: Positive Trust Anchors:
Sep 9 00:42:27.709126 systemd[1]: Started systemd-journald.service.
Sep 9 00:42:27.709142 kernel: audit: type=1130 audit(1757378547.706:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.706468 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:42:27.706496 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:42:27.713304 systemd-resolved[292]: Defaulting to hostname 'linux'.
Sep 9 00:42:27.722587 kernel: audit: type=1130 audit(1757378547.718:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.722608 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:42:27.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.714082 systemd[1]: Started systemd-resolved.service.
Sep 9 00:42:27.725600 kernel: audit: type=1130 audit(1757378547.723:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.719033 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 9 00:42:27.727196 kernel: Bridge firewalling registered
Sep 9 00:42:27.723263 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:42:27.726848 systemd[1]: Starting dracut-cmdline.service...
Sep 9 00:42:27.727179 systemd-modules-load[291]: Inserted module 'br_netfilter'
Sep 9 00:42:27.735430 dracut-cmdline[308]: dracut-dracut-053
Sep 9 00:42:27.737653 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:42:27.741337 kernel: SCSI subsystem initialized
Sep 9 00:42:27.744996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:42:27.745025 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:42:27.745034 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 9 00:42:27.748031 systemd-modules-load[291]: Inserted module 'dm_multipath'
Sep 9 00:42:27.748960 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:42:27.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.750259 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:42:27.753337 kernel: audit: type=1130 audit(1757378547.748:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.757874 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:42:27.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.762008 kernel: audit: type=1130 audit(1757378547.757:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.801998 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:42:27.814000 kernel: iscsi: registered transport (tcp)
Sep 9 00:42:27.829006 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:42:27.829037 kernel: QLogic iSCSI HBA Driver
Sep 9 00:42:27.862087 systemd[1]: Finished dracut-cmdline.service.
Sep 9 00:42:27.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.863399 systemd[1]: Starting dracut-pre-udev.service...
Sep 9 00:42:27.866283 kernel: audit: type=1130 audit(1757378547.861:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:27.905007 kernel: raid6: neonx8 gen() 13722 MB/s
Sep 9 00:42:27.921994 kernel: raid6: neonx8 xor() 10335 MB/s
Sep 9 00:42:27.939005 kernel: raid6: neonx4 gen() 13542 MB/s
Sep 9 00:42:27.956005 kernel: raid6: neonx4 xor() 10864 MB/s
Sep 9 00:42:27.973003 kernel: raid6: neonx2 gen() 12950 MB/s
Sep 9 00:42:27.990002 kernel: raid6: neonx2 xor() 10275 MB/s
Sep 9 00:42:28.007002 kernel: raid6: neonx1 gen() 10551 MB/s
Sep 9 00:42:28.024002 kernel: raid6: neonx1 xor() 8776 MB/s
Sep 9 00:42:28.041001 kernel: raid6: int64x8 gen() 6272 MB/s
Sep 9 00:42:28.058002 kernel: raid6: int64x8 xor() 3544 MB/s
Sep 9 00:42:28.075001 kernel: raid6: int64x4 gen() 7226 MB/s
Sep 9 00:42:28.091992 kernel: raid6: int64x4 xor() 3850 MB/s
Sep 9 00:42:28.109001 kernel: raid6: int64x2 gen() 6153 MB/s
Sep 9 00:42:28.126002 kernel: raid6: int64x2 xor() 3319 MB/s
Sep 9 00:42:28.143001 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 9 00:42:28.160253 kernel: raid6: int64x1 xor() 2646 MB/s
Sep 9 00:42:28.160275 kernel: raid6: using algorithm neonx8 gen() 13722 MB/s
Sep 9 00:42:28.160292 kernel: raid6: .... xor() 10335 MB/s, rmw enabled
Sep 9 00:42:28.160308 kernel: raid6: using neon recovery algorithm
Sep 9 00:42:28.170992 kernel: xor: measuring software checksum speed
Sep 9 00:42:28.171010 kernel: 8regs : 17209 MB/sec
Sep 9 00:42:28.172450 kernel: 32regs : 19575 MB/sec
Sep 9 00:42:28.172472 kernel: arm64_neon : 27691 MB/sec
Sep 9 00:42:28.172488 kernel: xor: using function: arm64_neon (27691 MB/sec)
Sep 9 00:42:28.223996 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 9 00:42:28.233765 systemd[1]: Finished dracut-pre-udev.service.
Sep 9 00:42:28.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:28.235523 systemd[1]: Starting systemd-udevd.service...
Sep 9 00:42:28.239270 kernel: audit: type=1130 audit(1757378548.233:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:28.239294 kernel: audit: type=1334 audit(1757378548.234:9): prog-id=7 op=LOAD Sep 9 00:42:28.239304 kernel: audit: type=1334 audit(1757378548.234:10): prog-id=8 op=LOAD Sep 9 00:42:28.234000 audit: BPF prog-id=7 op=LOAD Sep 9 00:42:28.234000 audit: BPF prog-id=8 op=LOAD Sep 9 00:42:28.248814 systemd-udevd[491]: Using default interface naming scheme 'v252'. Sep 9 00:42:28.252192 systemd[1]: Started systemd-udevd.service. Sep 9 00:42:28.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:28.253484 systemd[1]: Starting dracut-pre-trigger.service... Sep 9 00:42:28.264149 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Sep 9 00:42:28.289385 systemd[1]: Finished dracut-pre-trigger.service. Sep 9 00:42:28.290672 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:42:28.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:28.327052 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:42:28.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:28.357758 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:42:28.368088 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 9 00:42:28.368102 kernel: GPT:9289727 != 19775487 Sep 9 00:42:28.368110 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:42:28.368119 kernel: GPT:9289727 != 19775487 Sep 9 00:42:28.368127 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:42:28.368135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:42:28.385294 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 9 00:42:28.388046 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (553) Sep 9 00:42:28.390945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 9 00:42:28.391776 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 9 00:42:28.395881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 9 00:42:28.399119 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:42:28.400484 systemd[1]: Starting disk-uuid.service... Sep 9 00:42:28.407542 disk-uuid[563]: Primary Header is updated. Sep 9 00:42:28.407542 disk-uuid[563]: Secondary Entries is updated. Sep 9 00:42:28.407542 disk-uuid[563]: Secondary Header is updated. Sep 9 00:42:28.410722 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:42:29.417789 disk-uuid[564]: The operation has completed successfully. Sep 9 00:42:29.418638 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:42:29.445496 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:42:29.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.445587 systemd[1]: Finished disk-uuid.service. 
Sep 9 00:42:29.446990 systemd[1]: Starting verity-setup.service... Sep 9 00:42:29.459006 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 9 00:42:29.481904 systemd[1]: Found device dev-mapper-usr.device. Sep 9 00:42:29.483310 systemd[1]: Mounting sysusr-usr.mount... Sep 9 00:42:29.483996 systemd[1]: Finished verity-setup.service. Sep 9 00:42:29.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.529622 systemd[1]: Mounted sysusr-usr.mount. Sep 9 00:42:29.530683 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 9 00:42:29.530319 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 9 00:42:29.530939 systemd[1]: Starting ignition-setup.service... Sep 9 00:42:29.532851 systemd[1]: Starting parse-ip-for-networkd.service... Sep 9 00:42:29.540406 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:42:29.540439 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:42:29.540449 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:42:29.548275 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:42:29.553580 systemd[1]: Finished ignition-setup.service. Sep 9 00:42:29.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.554938 systemd[1]: Starting ignition-fetch-offline.service... Sep 9 00:42:29.610997 systemd[1]: Finished parse-ip-for-networkd.service. Sep 9 00:42:29.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:42:29.611088 ignition[650]: Ignition 2.14.0 Sep 9 00:42:29.611095 ignition[650]: Stage: fetch-offline Sep 9 00:42:29.612000 audit: BPF prog-id=9 op=LOAD Sep 9 00:42:29.614178 systemd[1]: Starting systemd-networkd.service... Sep 9 00:42:29.611131 ignition[650]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:42:29.611139 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:42:29.611269 ignition[650]: parsed url from cmdline: "" Sep 9 00:42:29.611272 ignition[650]: no config URL provided Sep 9 00:42:29.611276 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:42:29.611283 ignition[650]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:42:29.611302 ignition[650]: op(1): [started] loading QEMU firmware config module Sep 9 00:42:29.611307 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:42:29.616163 ignition[650]: op(1): [finished] loading QEMU firmware config module Sep 9 00:42:29.633956 systemd-networkd[740]: lo: Link UP Sep 9 00:42:29.633968 systemd-networkd[740]: lo: Gained carrier Sep 9 00:42:29.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.634356 systemd-networkd[740]: Enumeration completed Sep 9 00:42:29.634437 systemd[1]: Started systemd-networkd.service. Sep 9 00:42:29.634525 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:42:29.635538 systemd[1]: Reached target network.target. Sep 9 00:42:29.635564 systemd-networkd[740]: eth0: Link UP Sep 9 00:42:29.635568 systemd-networkd[740]: eth0: Gained carrier Sep 9 00:42:29.637627 systemd[1]: Starting iscsiuio.service... Sep 9 00:42:29.644759 systemd[1]: Started iscsiuio.service. 
Sep 9 00:42:29.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.646720 systemd[1]: Starting iscsid.service... Sep 9 00:42:29.650071 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:42:29.650071 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 9 00:42:29.650071 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 9 00:42:29.650071 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 9 00:42:29.650071 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:42:29.650071 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 9 00:42:29.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.652835 systemd[1]: Started iscsid.service. Sep 9 00:42:29.654047 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:42:29.658132 systemd[1]: Starting dracut-initqueue.service... Sep 9 00:42:29.668450 systemd[1]: Finished dracut-initqueue.service. Sep 9 00:42:29.669290 systemd[1]: Reached target remote-fs-pre.target. Sep 9 00:42:29.670612 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:42:29.672033 systemd[1]: Reached target remote-fs.target. 
Sep 9 00:42:29.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.674056 systemd[1]: Starting dracut-pre-mount.service... Sep 9 00:42:29.682542 systemd[1]: Finished dracut-pre-mount.service. Sep 9 00:42:29.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.689801 ignition[650]: parsing config with SHA512: 74320078e891984bc1c5709d10565cbdf8caa3064d4bd48eafd902fe9e74a26af51660df6c33368d9c6d2e03a9fcecad480865e187ca05a084ea9f3bd27dfdf2 Sep 9 00:42:29.697633 unknown[650]: fetched base config from "system" Sep 9 00:42:29.698274 ignition[650]: fetch-offline: fetch-offline passed Sep 9 00:42:29.697643 unknown[650]: fetched user config from "qemu" Sep 9 00:42:29.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.698334 ignition[650]: Ignition finished successfully Sep 9 00:42:29.699309 systemd[1]: Finished ignition-fetch-offline.service. Sep 9 00:42:29.700623 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:42:29.701291 systemd[1]: Starting ignition-kargs.service... Sep 9 00:42:29.710306 ignition[760]: Ignition 2.14.0 Sep 9 00:42:29.710318 ignition[760]: Stage: kargs Sep 9 00:42:29.710401 ignition[760]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:42:29.712334 systemd[1]: Finished ignition-kargs.service. Sep 9 00:42:29.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 9 00:42:29.710411 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:42:29.711250 ignition[760]: kargs: kargs passed Sep 9 00:42:29.714134 systemd[1]: Starting ignition-disks.service... Sep 9 00:42:29.711289 ignition[760]: Ignition finished successfully Sep 9 00:42:29.719836 ignition[766]: Ignition 2.14.0 Sep 9 00:42:29.719846 ignition[766]: Stage: disks Sep 9 00:42:29.719929 ignition[766]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:42:29.719938 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:42:29.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.721534 systemd[1]: Finished ignition-disks.service. Sep 9 00:42:29.720769 ignition[766]: disks: disks passed Sep 9 00:42:29.722283 systemd[1]: Reached target initrd-root-device.target. Sep 9 00:42:29.720807 ignition[766]: Ignition finished successfully Sep 9 00:42:29.723702 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:42:29.724729 systemd[1]: Reached target local-fs.target. Sep 9 00:42:29.725632 systemd[1]: Reached target sysinit.target. Sep 9 00:42:29.726806 systemd[1]: Reached target basic.target. Sep 9 00:42:29.728520 systemd[1]: Starting systemd-fsck-root.service... Sep 9 00:42:29.739742 systemd-fsck[774]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 9 00:42:29.744145 systemd[1]: Finished systemd-fsck-root.service. Sep 9 00:42:29.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.745541 systemd[1]: Mounting sysroot.mount... Sep 9 00:42:29.750994 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Sep 9 00:42:29.751412 systemd[1]: Mounted sysroot.mount. Sep 9 00:42:29.752026 systemd[1]: Reached target initrd-root-fs.target. Sep 9 00:42:29.753901 systemd[1]: Mounting sysroot-usr.mount... Sep 9 00:42:29.754733 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 9 00:42:29.754771 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:42:29.754793 systemd[1]: Reached target ignition-diskful.target. Sep 9 00:42:29.756437 systemd[1]: Mounted sysroot-usr.mount. Sep 9 00:42:29.757822 systemd[1]: Starting initrd-setup-root.service... Sep 9 00:42:29.761988 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:42:29.766107 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:42:29.769880 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:42:29.773746 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:42:29.800332 systemd[1]: Finished initrd-setup-root.service. Sep 9 00:42:29.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.801649 systemd[1]: Starting ignition-mount.service... Sep 9 00:42:29.802854 systemd[1]: Starting sysroot-boot.service... Sep 9 00:42:29.806578 bash[826]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 9 00:42:29.814186 ignition[827]: INFO : Ignition 2.14.0 Sep 9 00:42:29.814186 ignition[827]: INFO : Stage: mount Sep 9 00:42:29.815530 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:42:29.815530 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:42:29.815530 ignition[827]: INFO : mount: mount passed Sep 9 00:42:29.815530 ignition[827]: INFO : Ignition finished successfully Sep 9 00:42:29.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:29.816585 systemd[1]: Finished ignition-mount.service. Sep 9 00:42:29.826472 systemd[1]: Finished sysroot-boot.service. Sep 9 00:42:29.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:30.492612 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 9 00:42:30.498995 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (836) Sep 9 00:42:30.501002 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:42:30.501027 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:42:30.501038 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:42:30.504948 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 9 00:42:30.506271 systemd[1]: Starting ignition-files.service... 
Sep 9 00:42:30.526870 ignition[856]: INFO : Ignition 2.14.0 Sep 9 00:42:30.526870 ignition[856]: INFO : Stage: files Sep 9 00:42:30.528157 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:42:30.528157 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:42:30.528157 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:42:30.532942 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:42:30.532942 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:42:30.532942 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:42:30.532942 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:42:30.532942 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:42:30.532778 unknown[856]: wrote ssh authorized keys file for user: core Sep 9 00:42:30.544706 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 9 00:42:30.544706 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 9 00:42:30.544706 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 9 00:42:30.544706 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 9 00:42:30.592279 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:42:30.676186 systemd-networkd[740]: eth0: Gained IPv6LL Sep 9 00:42:30.816606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 9 00:42:30.816606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:42:30.819755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 9 00:42:31.128989 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 00:42:31.481900 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:42:31.481900 ignition[856]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 9 
00:42:31.485151 ignition[856]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:42:31.485151 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:42:31.507714 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:42:31.507714 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:42:31.507714 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:42:31.507714 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:42:31.507714 ignition[856]: INFO : files: files passed Sep 9 00:42:31.507714 ignition[856]: INFO : Ignition finished successfully Sep 9 00:42:31.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:42:31.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.508450 systemd[1]: Finished ignition-files.service. Sep 9 00:42:31.510517 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 9 00:42:31.511591 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 9 00:42:31.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.520206 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 9 00:42:31.512287 systemd[1]: Starting ignition-quench.service... Sep 9 00:42:31.523136 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:42:31.515140 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:42:31.515220 systemd[1]: Finished ignition-quench.service. Sep 9 00:42:31.517806 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 9 00:42:31.518828 systemd[1]: Reached target ignition-complete.target. Sep 9 00:42:31.521316 systemd[1]: Starting initrd-parse-etc.service... Sep 9 00:42:31.536416 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:42:31.536502 systemd[1]: Finished initrd-parse-etc.service. 
Sep 9 00:42:31.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.537807 systemd[1]: Reached target initrd-fs.target. Sep 9 00:42:31.538794 systemd[1]: Reached target initrd.target. Sep 9 00:42:31.539968 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 9 00:42:31.540658 systemd[1]: Starting dracut-pre-pivot.service... Sep 9 00:42:31.551342 systemd[1]: Finished dracut-pre-pivot.service. Sep 9 00:42:31.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.552669 systemd[1]: Starting initrd-cleanup.service... Sep 9 00:42:31.561139 systemd[1]: Stopped target nss-lookup.target. Sep 9 00:42:31.561822 systemd[1]: Stopped target remote-cryptsetup.target. Sep 9 00:42:31.563043 systemd[1]: Stopped target timers.target. Sep 9 00:42:31.564214 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:42:31.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.564312 systemd[1]: Stopped dracut-pre-pivot.service. Sep 9 00:42:31.565362 systemd[1]: Stopped target initrd.target. Sep 9 00:42:31.566533 systemd[1]: Stopped target basic.target. Sep 9 00:42:31.567565 systemd[1]: Stopped target ignition-complete.target. Sep 9 00:42:31.568665 systemd[1]: Stopped target ignition-diskful.target. 
Sep 9 00:42:31.569686 systemd[1]: Stopped target initrd-root-device.target. Sep 9 00:42:31.570821 systemd[1]: Stopped target remote-fs.target. Sep 9 00:42:31.571887 systemd[1]: Stopped target remote-fs-pre.target. Sep 9 00:42:31.573233 systemd[1]: Stopped target sysinit.target. Sep 9 00:42:31.574245 systemd[1]: Stopped target local-fs.target. Sep 9 00:42:31.575272 systemd[1]: Stopped target local-fs-pre.target. Sep 9 00:42:31.576288 systemd[1]: Stopped target swap.target. Sep 9 00:42:31.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.577230 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:42:31.577335 systemd[1]: Stopped dracut-pre-mount.service. Sep 9 00:42:31.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.578390 systemd[1]: Stopped target cryptsetup.target. Sep 9 00:42:31.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.579342 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:42:31.579437 systemd[1]: Stopped dracut-initqueue.service. Sep 9 00:42:31.580620 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:42:31.580727 systemd[1]: Stopped ignition-fetch-offline.service. Sep 9 00:42:31.581746 systemd[1]: Stopped target paths.target. Sep 9 00:42:31.582686 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:42:31.586021 systemd[1]: Stopped systemd-ask-password-console.path. Sep 9 00:42:31.586769 systemd[1]: Stopped target slices.target. 
Sep 9 00:42:31.587987 systemd[1]: Stopped target sockets.target. Sep 9 00:42:31.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.589148 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:42:31.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.589256 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 9 00:42:31.590344 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:42:31.593988 iscsid[745]: iscsid shutting down. Sep 9 00:42:31.590431 systemd[1]: Stopped ignition-files.service. Sep 9 00:42:31.592228 systemd[1]: Stopping ignition-mount.service... Sep 9 00:42:31.594972 systemd[1]: Stopping iscsid.service... Sep 9 00:42:31.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:31.595715 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:42:31.595846 systemd[1]: Stopped kmod-static-nodes.service. 
Sep 9 00:42:31.600377 ignition[897]: INFO : Ignition 2.14.0
Sep 9 00:42:31.600377 ignition[897]: INFO : Stage: umount
Sep 9 00:42:31.600377 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:42:31.600377 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:42:31.600377 ignition[897]: INFO : umount: umount passed
Sep 9 00:42:31.600377 ignition[897]: INFO : Ignition finished successfully
Sep 9 00:42:31.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.597664 systemd[1]: Stopping sysroot-boot.service...
Sep 9 00:42:31.599817 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:42:31.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.599958 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 9 00:42:31.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.601204 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:42:31.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.602038 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 9 00:42:31.604184 systemd[1]: iscsid.service: Deactivated successfully.
Sep 9 00:42:31.604272 systemd[1]: Stopped iscsid.service.
Sep 9 00:42:31.605371 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:42:31.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.605448 systemd[1]: Stopped ignition-mount.service.
Sep 9 00:42:31.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.606601 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:42:31.606667 systemd[1]: Closed iscsid.socket.
Sep 9 00:42:31.607553 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:42:31.607592 systemd[1]: Stopped ignition-disks.service.
Sep 9 00:42:31.608726 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:42:31.608763 systemd[1]: Stopped ignition-kargs.service.
Sep 9 00:42:31.610841 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:42:31.610886 systemd[1]: Stopped ignition-setup.service.
Sep 9 00:42:31.612154 systemd[1]: Stopping iscsiuio.service...
Sep 9 00:42:31.613951 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:42:31.614419 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:42:31.614501 systemd[1]: Finished initrd-cleanup.service.
Sep 9 00:42:31.615926 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 9 00:42:31.616017 systemd[1]: Stopped iscsiuio.service.
Sep 9 00:42:31.617609 systemd[1]: Stopped target network.target.
Sep 9 00:42:31.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.621530 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:42:31.621563 systemd[1]: Closed iscsiuio.socket.
Sep 9 00:42:31.622526 systemd[1]: Stopping systemd-networkd.service...
Sep 9 00:42:31.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.623704 systemd[1]: Stopping systemd-resolved.service...
Sep 9 00:42:31.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.629114 systemd-networkd[740]: eth0: DHCPv6 lease lost
Sep 9 00:42:31.630585 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:42:31.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.641000 audit: BPF prog-id=9 op=UNLOAD
Sep 9 00:42:31.630684 systemd[1]: Stopped systemd-networkd.service.
Sep 9 00:42:31.633307 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:42:31.633334 systemd[1]: Closed systemd-networkd.socket.
Sep 9 00:42:31.635099 systemd[1]: Stopping network-cleanup.service...
Sep 9 00:42:31.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.636339 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:42:31.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.636397 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 9 00:42:31.637467 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:42:31.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.637505 systemd[1]: Stopped systemd-sysctl.service.
Sep 9 00:42:31.639357 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:42:31.651000 audit: BPF prog-id=6 op=UNLOAD
Sep 9 00:42:31.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.639400 systemd[1]: Stopped systemd-modules-load.service.
Sep 9 00:42:31.640916 systemd[1]: Stopping systemd-udevd.service...
Sep 9 00:42:31.645200 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:42:31.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.645623 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:42:31.645715 systemd[1]: Stopped systemd-resolved.service.
Sep 9 00:42:31.647398 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:42:31.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.647476 systemd[1]: Stopped sysroot-boot.service.
Sep 9 00:42:31.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.648996 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:42:31.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.649047 systemd[1]: Stopped initrd-setup-root.service.
Sep 9 00:42:31.651161 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:42:31.651240 systemd[1]: Stopped network-cleanup.service.
Sep 9 00:42:31.654064 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:42:31.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.654175 systemd[1]: Stopped systemd-udevd.service.
Sep 9 00:42:31.655132 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:42:31.655166 systemd[1]: Closed systemd-udevd-control.socket.
Sep 9 00:42:31.656427 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:42:31.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:31.656456 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 9 00:42:31.657565 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:42:31.657604 systemd[1]: Stopped dracut-pre-udev.service.
Sep 9 00:42:31.658755 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:42:31.658795 systemd[1]: Stopped dracut-cmdline.service.
Sep 9 00:42:31.659857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:42:31.659889 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 9 00:42:31.661621 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 9 00:42:31.662323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:42:31.662374 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 9 00:42:31.666884 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:42:31.676000 audit: BPF prog-id=8 op=UNLOAD
Sep 9 00:42:31.676000 audit: BPF prog-id=7 op=UNLOAD
Sep 9 00:42:31.666964 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 9 00:42:31.668248 systemd[1]: Reached target initrd-switch-root.target.
Sep 9 00:42:31.677000 audit: BPF prog-id=5 op=UNLOAD
Sep 9 00:42:31.677000 audit: BPF prog-id=4 op=UNLOAD
Sep 9 00:42:31.677000 audit: BPF prog-id=3 op=UNLOAD
Sep 9 00:42:31.670006 systemd[1]: Starting initrd-switch-root.service...
Sep 9 00:42:31.676137 systemd[1]: Switching root.
Sep 9 00:42:31.689328 systemd-journald[290]: Journal stopped
Sep 9 00:42:33.724107 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:42:33.724159 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 9 00:42:33.724171 kernel: SELinux: Class anon_inode not defined in policy.
Sep 9 00:42:33.724182 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 9 00:42:33.724196 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:42:33.724206 kernel: SELinux: policy capability open_perms=1
Sep 9 00:42:33.724216 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:42:33.724228 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:42:33.724237 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:42:33.724247 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:42:33.724257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:42:33.724270 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:42:33.724280 systemd[1]: Successfully loaded SELinux policy in 33.758ms.
Sep 9 00:42:33.724295 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.999ms.
Sep 9 00:42:33.724307 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:42:33.724318 systemd[1]: Detected virtualization kvm.
Sep 9 00:42:33.724329 systemd[1]: Detected architecture arm64.
Sep 9 00:42:33.724341 systemd[1]: Detected first boot.
Sep 9 00:42:33.724352 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:42:33.724362 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 9 00:42:33.724372 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:42:33.724383 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:42:33.724394 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:42:33.724406 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:42:33.724417 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:42:33.724429 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 9 00:42:33.724440 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 9 00:42:33.724452 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 9 00:42:33.724464 systemd[1]: Created slice system-getty.slice.
Sep 9 00:42:33.724476 systemd[1]: Created slice system-modprobe.slice.
Sep 9 00:42:33.724486 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 9 00:42:33.724497 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 9 00:42:33.724507 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 9 00:42:33.724517 systemd[1]: Created slice user.slice.
Sep 9 00:42:33.724528 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:42:33.724539 systemd[1]: Started systemd-ask-password-wall.path.
Sep 9 00:42:33.724549 systemd[1]: Set up automount boot.automount.
Sep 9 00:42:33.724559 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 9 00:42:33.724569 systemd[1]: Reached target integritysetup.target.
Sep 9 00:42:33.724579 systemd[1]: Reached target remote-cryptsetup.target.
Sep 9 00:42:33.724590 systemd[1]: Reached target remote-fs.target.
Sep 9 00:42:33.724600 systemd[1]: Reached target slices.target.
Sep 9 00:42:33.724612 systemd[1]: Reached target swap.target.
Sep 9 00:42:33.724622 systemd[1]: Reached target torcx.target.
Sep 9 00:42:33.724635 systemd[1]: Reached target veritysetup.target.
Sep 9 00:42:33.724645 systemd[1]: Listening on systemd-coredump.socket.
Sep 9 00:42:33.724655 systemd[1]: Listening on systemd-initctl.socket.
Sep 9 00:42:33.724665 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 9 00:42:33.724683 kernel: kauditd_printk_skb: 78 callbacks suppressed
Sep 9 00:42:33.724695 kernel: audit: type=1400 audit(1757378553.650:82): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 9 00:42:33.724705 kernel: audit: type=1335 audit(1757378553.650:83): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 9 00:42:33.724717 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 9 00:42:33.724727 systemd[1]: Listening on systemd-journald.socket.
Sep 9 00:42:33.724737 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:42:33.724748 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:42:33.724759 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:42:33.724769 systemd[1]: Listening on systemd-userdbd.socket.
Sep 9 00:42:33.724779 systemd[1]: Mounting dev-hugepages.mount...
Sep 9 00:42:33.724789 systemd[1]: Mounting dev-mqueue.mount...
Sep 9 00:42:33.724799 systemd[1]: Mounting media.mount...
Sep 9 00:42:33.724811 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 9 00:42:33.724821 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 9 00:42:33.724831 systemd[1]: Mounting tmp.mount...
Sep 9 00:42:33.724841 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 9 00:42:33.724851 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:42:33.724862 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:42:33.724873 systemd[1]: Starting modprobe@configfs.service...
Sep 9 00:42:33.724883 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:42:33.724893 systemd[1]: Starting modprobe@drm.service...
Sep 9 00:42:33.724905 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:42:33.724916 systemd[1]: Starting modprobe@fuse.service...
Sep 9 00:42:33.724927 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:42:33.724937 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:42:33.724948 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 9 00:42:33.724959 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 9 00:42:33.724969 systemd[1]: Starting systemd-journald.service...
Sep 9 00:42:33.724986 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:42:33.724998 systemd[1]: Starting systemd-network-generator.service...
Sep 9 00:42:33.725009 systemd[1]: Starting systemd-remount-fs.service...
Sep 9 00:42:33.725019 systemd[1]: Starting systemd-udev-trigger.service...
Sep 9 00:42:33.725030 kernel: fuse: init (API version 7.34)
Sep 9 00:42:33.725039 kernel: loop: module loaded
Sep 9 00:42:33.725052 systemd[1]: Mounted dev-hugepages.mount.
Sep 9 00:42:33.725062 systemd[1]: Mounted dev-mqueue.mount.
Sep 9 00:42:33.725072 kernel: audit: type=1305 audit(1757378553.721:84): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 9 00:42:33.725083 kernel: audit: type=1300 audit(1757378553.721:84): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc5a0d650 a2=4000 a3=1 items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:33.725093 kernel: audit: type=1327 audit(1757378553.721:84): proctitle="/usr/lib/systemd/systemd-journald"
Sep 9 00:42:33.725106 systemd-journald[1026]: Journal started
Sep 9 00:42:33.725146 systemd-journald[1026]: Runtime Journal (/run/log/journal/0d4cdec9609040b68733c930e52cd975) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:42:33.650000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 9 00:42:33.650000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 9 00:42:33.721000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 9 00:42:33.721000 audit[1026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc5a0d650 a2=4000 a3=1 items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:33.721000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 9 00:42:33.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.732090 systemd[1]: Started systemd-journald.service.
Sep 9 00:42:33.732116 kernel: audit: type=1130 audit(1757378553.731:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.732642 systemd[1]: Mounted media.mount.
Sep 9 00:42:33.735050 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 9 00:42:33.736619 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 9 00:42:33.737460 systemd[1]: Mounted tmp.mount.
Sep 9 00:42:33.738449 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:42:33.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.739412 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:42:33.739620 systemd[1]: Finished modprobe@configfs.service.
Sep 9 00:42:33.742557 kernel: audit: type=1130 audit(1757378553.739:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.742604 kernel: audit: type=1130 audit(1757378553.740:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.742264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:42:33.742528 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:42:33.746270 kernel: audit: type=1131 audit(1757378553.741:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.747386 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:42:33.747692 systemd[1]: Finished modprobe@drm.service.
Sep 9 00:42:33.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.750217 kernel: audit: type=1130 audit(1757378553.746:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.750194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:42:33.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.750430 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:42:33.751472 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:42:33.751701 systemd[1]: Finished modprobe@fuse.service.
Sep 9 00:42:33.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.752591 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:42:33.752813 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:42:33.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.753904 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:42:33.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.755075 systemd[1]: Finished systemd-network-generator.service.
Sep 9 00:42:33.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.756231 systemd[1]: Finished systemd-remount-fs.service.
Sep 9 00:42:33.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.757248 systemd[1]: Reached target network-pre.target.
Sep 9 00:42:33.759461 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 9 00:42:33.761363 systemd[1]: Mounting sys-kernel-config.mount...
Sep 9 00:42:33.761989 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:42:33.769822 systemd[1]: Starting systemd-hwdb-update.service...
Sep 9 00:42:33.771793 systemd[1]: Starting systemd-journal-flush.service...
Sep 9 00:42:33.772690 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:42:33.773951 systemd[1]: Starting systemd-random-seed.service...
Sep 9 00:42:33.774816 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:42:33.776113 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:42:33.778373 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 9 00:42:33.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.780842 systemd[1]: Finished systemd-udev-trigger.service.
Sep 9 00:42:33.781841 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 9 00:42:33.784076 systemd-journald[1026]: Time spent on flushing to /var/log/journal/0d4cdec9609040b68733c930e52cd975 is 22.753ms for 932 entries.
Sep 9 00:42:33.784076 systemd-journald[1026]: System Journal (/var/log/journal/0d4cdec9609040b68733c930e52cd975) is 8.0M, max 195.6M, 187.6M free.
Sep 9 00:42:33.815525 systemd-journald[1026]: Received client request to flush runtime journal.
Sep 9 00:42:33.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.783508 systemd[1]: Mounted sys-kernel-config.mount.
Sep 9 00:42:33.785356 systemd[1]: Finished systemd-random-seed.service.
Sep 9 00:42:33.786891 systemd[1]: Reached target first-boot-complete.target.
Sep 9 00:42:33.816465 udevadm[1080]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 9 00:42:33.789359 systemd[1]: Starting systemd-sysusers.service...
Sep 9 00:42:33.791638 systemd[1]: Starting systemd-udev-settle.service...
Sep 9 00:42:33.799817 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:42:33.812543 systemd[1]: Finished systemd-sysusers.service.
Sep 9 00:42:33.814689 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 9 00:42:33.816916 systemd[1]: Finished systemd-journal-flush.service.
Sep 9 00:42:33.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:33.833246 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 9 00:42:33.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.140842 systemd[1]: Finished systemd-hwdb-update.service.
Sep 9 00:42:34.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.142856 systemd[1]: Starting systemd-udevd.service...
Sep 9 00:42:34.158156 systemd-udevd[1090]: Using default interface naming scheme 'v252'.
Sep 9 00:42:34.171247 systemd[1]: Started systemd-udevd.service.
Sep 9 00:42:34.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.173380 systemd[1]: Starting systemd-networkd.service...
Sep 9 00:42:34.181161 systemd[1]: Starting systemd-userdbd.service...
Sep 9 00:42:34.193234 systemd[1]: Found device dev-ttyAMA0.device.
Sep 9 00:42:34.213606 systemd[1]: Started systemd-userdbd.service.
Sep 9 00:42:34.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.228026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 9 00:42:34.270221 systemd-networkd[1097]: lo: Link UP
Sep 9 00:42:34.270235 systemd-networkd[1097]: lo: Gained carrier
Sep 9 00:42:34.270618 systemd-networkd[1097]: Enumeration completed
Sep 9 00:42:34.270758 systemd[1]: Started systemd-networkd.service.
Sep 9 00:42:34.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.271654 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:42:34.272854 systemd-networkd[1097]: eth0: Link UP
Sep 9 00:42:34.272867 systemd-networkd[1097]: eth0: Gained carrier
Sep 9 00:42:34.278431 systemd[1]: Finished systemd-udev-settle.service.
Sep 9 00:42:34.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.280464 systemd[1]: Starting lvm2-activation-early.service...
Sep 9 00:42:34.288837 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:42:34.293158 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:42:34.315831 systemd[1]: Finished lvm2-activation-early.service.
Sep 9 00:42:34.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.316720 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:42:34.318577 systemd[1]: Starting lvm2-activation.service...
Sep 9 00:42:34.322111 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:42:34.345810 systemd[1]: Finished lvm2-activation.service.
Sep 9 00:42:34.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.346630 systemd[1]: Reached target local-fs-pre.target.
Sep 9 00:42:34.347446 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:42:34.347477 systemd[1]: Reached target local-fs.target.
Sep 9 00:42:34.348097 systemd[1]: Reached target machines.target.
Sep 9 00:42:34.349882 systemd[1]: Starting ldconfig.service...
Sep 9 00:42:34.350836 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.350915 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:42:34.352272 systemd[1]: Starting systemd-boot-update.service...
Sep 9 00:42:34.354139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 9 00:42:34.356231 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 9 00:42:34.358139 systemd[1]: Starting systemd-sysext.service...
Sep 9 00:42:34.359313 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl)
Sep 9 00:42:34.360629 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 9 00:42:34.365458 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 9 00:42:34.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.377192 systemd[1]: Unmounting usr-share-oem.mount...
Sep 9 00:42:34.382835 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 9 00:42:34.383233 systemd[1]: Unmounted usr-share-oem.mount.
Sep 9 00:42:34.426994 kernel: loop0: detected capacity change from 0 to 203944
Sep 9 00:42:34.427315 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 9 00:42:34.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.437999 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:42:34.439763 systemd-fsck[1137]: fsck.fat 4.2 (2021-01-31)
Sep 9 00:42:34.439763 systemd-fsck[1137]: /dev/vda1: 236 files, 117310/258078 clusters
Sep 9 00:42:34.442147 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 9 00:42:34.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.457997 kernel: loop1: detected capacity change from 0 to 203944
Sep 9 00:42:34.462561 (sd-sysext)[1147]: Using extensions 'kubernetes'.
Sep 9 00:42:34.463230 (sd-sysext)[1147]: Merged extensions into '/usr'.
Sep 9 00:42:34.480255 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.481960 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:42:34.484256 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:42:34.486276 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:42:34.486937 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.487074 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:42:34.487775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:42:34.487920 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:42:34.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.489470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:42:34.489686 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:42:34.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.491086 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:42:34.491301 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:42:34.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.492623 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:42:34.492748 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.532360 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:42:34.535701 systemd[1]: Finished ldconfig.service.
Sep 9 00:42:34.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.720606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:42:34.722503 systemd[1]: Mounting boot.mount...
Sep 9 00:42:34.724368 systemd[1]: Mounting usr-share-oem.mount...
Sep 9 00:42:34.730839 systemd[1]: Mounted boot.mount.
Sep 9 00:42:34.731751 systemd[1]: Mounted usr-share-oem.mount.
Sep 9 00:42:34.733620 systemd[1]: Finished systemd-sysext.service.
Sep 9 00:42:34.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.735834 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:42:34.737842 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 9 00:42:34.739180 systemd[1]: Finished systemd-boot-update.service.
Sep 9 00:42:34.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.743085 systemd[1]: Reloading.
Sep 9 00:42:34.746691 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 9 00:42:34.747404 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:42:34.748708 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:42:34.781692 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-09-09T00:42:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 9 00:42:34.781720 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-09-09T00:42:34Z" level=info msg="torcx already run"
Sep 9 00:42:34.846313 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:42:34.846336 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:42:34.861724 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:42:34.906960 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 9 00:42:34.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.911075 systemd[1]: Starting audit-rules.service...
Sep 9 00:42:34.912897 systemd[1]: Starting clean-ca-certificates.service...
Sep 9 00:42:34.915384 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 9 00:42:34.917629 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:42:34.920218 systemd[1]: Starting systemd-timesyncd.service...
Sep 9 00:42:34.922202 systemd[1]: Starting systemd-update-utmp.service...
Sep 9 00:42:34.923610 systemd[1]: Finished clean-ca-certificates.service.
Sep 9 00:42:34.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.926000 audit[1242]: SYSTEM_BOOT pid=1242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.927522 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:42:34.931681 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.933247 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:42:34.935110 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:42:34.937122 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:42:34.937773 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.937913 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:42:34.938029 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:42:34.938939 systemd[1]: Finished systemd-update-utmp.service.
Sep 9 00:42:34.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.942382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:42:34.942529 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:42:34.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.945596 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 9 00:42:34.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.946813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:42:34.946960 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:42:34.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.948238 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:42:34.948437 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:42:34.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.951426 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.952771 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:42:34.954573 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:42:34.956492 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:42:34.957131 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.957284 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:42:34.958717 systemd[1]: Starting systemd-update-done.service...
Sep 9 00:42:34.959493 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:42:34.960619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:42:34.960809 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:42:34.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.962109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:42:34.962260 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:42:34.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.963461 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:42:34.965690 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.966930 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:42:34.969462 systemd[1]: Starting modprobe@drm.service...
Sep 9 00:42:34.971284 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:42:34.972242 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.972395 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:42:34.974051 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 9 00:42:34.974893 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:42:34.975912 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:42:34.976091 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:42:34.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.977682 systemd[1]: Finished systemd-update-done.service.
Sep 9 00:42:34.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:34.983489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:42:34.983630 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:42:34.984374 augenrules[1272]: No rules
Sep 9 00:42:34.982000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 9 00:42:34.982000 audit[1272]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc06633d0 a2=420 a3=0 items=0 ppid=1230 pid=1272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:34.982000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 9 00:42:34.984843 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:42:34.985004 systemd[1]: Finished modprobe@drm.service.
Sep 9 00:42:34.986178 systemd[1]: Finished audit-rules.service.
Sep 9 00:42:34.987133 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:42:34.987293 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:42:34.988713 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:42:34.988781 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:42:34.990022 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:42:34.993316 systemd[1]: Started systemd-timesyncd.service.
Sep 9 00:42:34.994140 systemd-timesyncd[1238]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:42:34.994195 systemd-timesyncd[1238]: Initial clock synchronization to Tue 2025-09-09 00:42:34.943522 UTC.
Sep 9 00:42:34.994342 systemd[1]: Reached target time-set.target.
Sep 9 00:42:34.998025 systemd-resolved[1235]: Positive Trust Anchors:
Sep 9 00:42:34.998272 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:42:34.998373 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:42:35.006779 systemd-resolved[1235]: Defaulting to hostname 'linux'.
Sep 9 00:42:35.008498 systemd[1]: Started systemd-resolved.service.
Sep 9 00:42:35.009257 systemd[1]: Reached target network.target.
Sep 9 00:42:35.009853 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:42:35.010521 systemd[1]: Reached target sysinit.target.
Sep 9 00:42:35.011178 systemd[1]: Started motdgen.path.
Sep 9 00:42:35.011718 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 9 00:42:35.012753 systemd[1]: Started logrotate.timer.
Sep 9 00:42:35.013468 systemd[1]: Started mdadm.timer.
Sep 9 00:42:35.014018 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 9 00:42:35.014637 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:42:35.014664 systemd[1]: Reached target paths.target.
Sep 9 00:42:35.015233 systemd[1]: Reached target timers.target.
Sep 9 00:42:35.016149 systemd[1]: Listening on dbus.socket.
Sep 9 00:42:35.018048 systemd[1]: Starting docker.socket...
Sep 9 00:42:35.019728 systemd[1]: Listening on sshd.socket.
Sep 9 00:42:35.020502 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:42:35.020863 systemd[1]: Listening on docker.socket.
Sep 9 00:42:35.021565 systemd[1]: Reached target sockets.target.
Sep 9 00:42:35.022212 systemd[1]: Reached target basic.target.
Sep 9 00:42:35.022994 systemd[1]: System is tainted: cgroupsv1
Sep 9 00:42:35.023048 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:42:35.023075 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:42:35.024218 systemd[1]: Starting containerd.service...
Sep 9 00:42:35.026101 systemd[1]: Starting dbus.service...
Sep 9 00:42:35.028047 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 9 00:42:35.030296 systemd[1]: Starting extend-filesystems.service...
Sep 9 00:42:35.031122 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 9 00:42:35.032622 systemd[1]: Starting motdgen.service...
Sep 9 00:42:35.033198 jq[1289]: false
Sep 9 00:42:35.034861 systemd[1]: Starting prepare-helm.service...
Sep 9 00:42:35.037521 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 9 00:42:35.039738 systemd[1]: Starting sshd-keygen.service...
Sep 9 00:42:35.042424 systemd[1]: Starting systemd-logind.service...
Sep 9 00:42:35.043069 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:42:35.043175 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:42:35.044621 systemd[1]: Starting update-engine.service...
Sep 9 00:42:35.047416 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 9 00:42:35.051534 jq[1303]: true
Sep 9 00:42:35.053742 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:42:35.054046 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 9 00:42:35.055220 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:42:35.055501 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 9 00:42:35.061864 extend-filesystems[1290]: Found loop1
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda1
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda2
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda3
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found usr
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda4
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda6
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda7
Sep 9 00:42:35.062691 extend-filesystems[1290]: Found vda9
Sep 9 00:42:35.062691 extend-filesystems[1290]: Checking size of /dev/vda9
Sep 9 00:42:35.067064 dbus-daemon[1288]: [system] SELinux support is enabled
Sep 9 00:42:35.067315 systemd[1]: Started dbus.service.
Sep 9 00:42:35.070183 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:42:35.070239 systemd[1]: Reached target system-config.target.
Sep 9 00:42:35.071191 jq[1315]: true
Sep 9 00:42:35.071153 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:42:35.071170 systemd[1]: Reached target user-config.target.
Sep 9 00:42:35.078051 tar[1314]: linux-arm64/helm
Sep 9 00:42:35.081342 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:42:35.081584 systemd[1]: Finished motdgen.service.
Sep 9 00:42:35.099156 extend-filesystems[1290]: Resized partition /dev/vda9
Sep 9 00:42:35.103297 extend-filesystems[1331]: resize2fs 1.46.5 (30-Dec-2021)
Sep 9 00:42:35.109994 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:42:35.129734 update_engine[1302]: I0909 00:42:35.128599 1302 main.cc:92] Flatcar Update Engine starting
Sep 9 00:42:35.131856 update_engine[1302]: I0909 00:42:35.131810 1302 update_check_scheduler.cc:74] Next update check in 9m31s
Sep 9 00:42:35.131843 systemd[1]: Started update-engine.service.
Sep 9 00:42:35.134609 systemd[1]: Started locksmithd.service.
Sep 9 00:42:35.138276 systemd-logind[1299]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 00:42:35.138565 systemd-logind[1299]: New seat seat0.
Sep 9 00:42:35.140499 systemd[1]: Started systemd-logind.service.
Sep 9 00:42:35.151409 env[1317]: time="2025-09-09T00:42:35.151356820Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 9 00:42:35.168674 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:42:35.170800 extend-filesystems[1331]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:42:35.170800 extend-filesystems[1331]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:42:35.170800 extend-filesystems[1331]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.170675748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.170810173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.172396567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.172425503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.172758307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.174062336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.174081812Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.174092663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.174185433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:42:35.180183 env[1317]: time="2025-09-09T00:42:35.174466565Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:42:35.173127 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:42:35.180448 extend-filesystems[1290]: Resized filesystem in /dev/vda9
Sep 9 00:42:35.181373 bash[1345]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:42:35.181466 env[1317]: time="2025-09-09T00:42:35.174686845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:42:35.181466 env[1317]: time="2025-09-09T00:42:35.174707752Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 00:42:35.181466 env[1317]: time="2025-09-09T00:42:35.174770394Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 9 00:42:35.181466 env[1317]: time="2025-09-09T00:42:35.174786173Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:42:35.173453 systemd[1]: Finished extend-filesystems.service.
Sep 9 00:42:35.181417 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 9 00:42:35.184699 env[1317]: time="2025-09-09T00:42:35.184566310Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 9 00:42:35.184699 env[1317]: time="2025-09-09T00:42:35.184666632Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 00:42:35.184699 env[1317]: time="2025-09-09T00:42:35.184687300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 00:42:35.184824 env[1317]: time="2025-09-09T00:42:35.184717985Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.184824 env[1317]: time="2025-09-09T00:42:35.184732811Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.184824 env[1317]: time="2025-09-09T00:42:35.184746444Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.184824 env[1317]: time="2025-09-09T00:42:35.184758369Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.185585 env[1317]: time="2025-09-09T00:42:35.185390788Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.185653 env[1317]: time="2025-09-09T00:42:35.185592068Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.185653 env[1317]: time="2025-09-09T00:42:35.185608324Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.185653 env[1317]: time="2025-09-09T00:42:35.185632570Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.185653 env[1317]: time="2025-09-09T00:42:35.185645409Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 9 00:42:35.185944 env[1317]: time="2025-09-09T00:42:35.185851419Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 9 00:42:35.186093 env[1317]: time="2025-09-09T00:42:35.186041371Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186474457Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186527083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186542386Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186647716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186660912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186673194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186684681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186697400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186708728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186719579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186731384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186745653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186876978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..."
type=io.containerd.grpc.v1 Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186896733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:42:35.187913 env[1317]: time="2025-09-09T00:42:35.186910406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:42:35.189311 env[1317]: time="2025-09-09T00:42:35.186922648Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:42:35.189311 env[1317]: time="2025-09-09T00:42:35.186936997Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 9 00:42:35.189311 env[1317]: time="2025-09-09T00:42:35.186948841Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:42:35.189311 env[1317]: time="2025-09-09T00:42:35.186965098Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 9 00:42:35.189311 env[1317]: time="2025-09-09T00:42:35.187014504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 9 00:42:35.188390 systemd[1]: Started containerd.service. 
Sep 9 00:42:35.189494 env[1317]: time="2025-09-09T00:42:35.187202031Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:42:35.189494 env[1317]: time="2025-09-09T00:42:35.187255810Z" level=info msg="Connect containerd service" Sep 9 00:42:35.189494 env[1317]: time="2025-09-09T00:42:35.187289873Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:42:35.189494 env[1317]: time="2025-09-09T00:42:35.187842479Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:42:35.189494 env[1317]: time="2025-09-09T00:42:35.188213083Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:42:35.189494 env[1317]: time="2025-09-09T00:42:35.188253864Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:42:35.189494 env[1317]: time="2025-09-09T00:42:35.188301322Z" level=info msg="containerd successfully booted in 0.038868s" Sep 9 00:42:35.192870 env[1317]: time="2025-09-09T00:42:35.192256653Z" level=info msg="Start subscribing containerd event" Sep 9 00:42:35.192870 env[1317]: time="2025-09-09T00:42:35.192395053Z" level=info msg="Start recovering state" Sep 9 00:42:35.194535 env[1317]: time="2025-09-09T00:42:35.194118337Z" level=info msg="Start event monitor" Sep 9 00:42:35.194535 env[1317]: time="2025-09-09T00:42:35.194156931Z" level=info msg="Start snapshots syncer" Sep 9 00:42:35.194535 env[1317]: time="2025-09-09T00:42:35.194168975Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:42:35.194535 env[1317]: time="2025-09-09T00:42:35.194176765Z" level=info msg="Start streaming server" Sep 9 00:42:35.211016 locksmithd[1347]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:42:35.449168 tar[1314]: linux-arm64/LICENSE Sep 9 
00:42:35.449168 tar[1314]: linux-arm64/README.md Sep 9 00:42:35.453330 systemd[1]: Finished prepare-helm.service. Sep 9 00:42:36.182163 systemd-networkd[1097]: eth0: Gained IPv6LL Sep 9 00:42:36.183836 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 9 00:42:36.184941 systemd[1]: Reached target network-online.target. Sep 9 00:42:36.187656 systemd[1]: Starting kubelet.service... Sep 9 00:42:36.667365 sshd_keygen[1308]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:42:36.685351 systemd[1]: Finished sshd-keygen.service. Sep 9 00:42:36.687728 systemd[1]: Starting issuegen.service... Sep 9 00:42:36.692825 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:42:36.693079 systemd[1]: Finished issuegen.service. Sep 9 00:42:36.695383 systemd[1]: Starting systemd-user-sessions.service... Sep 9 00:42:36.701599 systemd[1]: Finished systemd-user-sessions.service. Sep 9 00:42:36.704016 systemd[1]: Started getty@tty1.service. Sep 9 00:42:36.706153 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 9 00:42:36.707277 systemd[1]: Reached target getty.target. Sep 9 00:42:36.789829 systemd[1]: Started kubelet.service. Sep 9 00:42:36.791069 systemd[1]: Reached target multi-user.target. Sep 9 00:42:36.793058 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 9 00:42:36.799338 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 9 00:42:36.799550 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 9 00:42:36.801109 systemd[1]: Startup finished in 4.739s (kernel) + 5.064s (userspace) = 9.804s. 
Sep 9 00:42:37.200026 kubelet[1390]: E0909 00:42:37.199960 1390 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:42:37.202044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:42:37.202179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:42:40.195006 systemd[1]: Created slice system-sshd.slice. Sep 9 00:42:40.196136 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:43620.service. Sep 9 00:42:40.245994 sshd[1400]: Accepted publickey for core from 10.0.0.1 port 43620 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:42:40.249123 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:40.258998 systemd[1]: Created slice user-500.slice. Sep 9 00:42:40.260066 systemd[1]: Starting user-runtime-dir@500.service... Sep 9 00:42:40.263273 systemd-logind[1299]: New session 1 of user core. Sep 9 00:42:40.269084 systemd[1]: Finished user-runtime-dir@500.service. Sep 9 00:42:40.270365 systemd[1]: Starting user@500.service... Sep 9 00:42:40.276033 (systemd)[1405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:40.349407 systemd[1405]: Queued start job for default target default.target. Sep 9 00:42:40.349637 systemd[1405]: Reached target paths.target. Sep 9 00:42:40.349652 systemd[1405]: Reached target sockets.target. Sep 9 00:42:40.349664 systemd[1405]: Reached target timers.target. Sep 9 00:42:40.349674 systemd[1405]: Reached target basic.target. Sep 9 00:42:40.349713 systemd[1405]: Reached target default.target. Sep 9 00:42:40.349733 systemd[1405]: Startup finished in 67ms. Sep 9 00:42:40.349833 systemd[1]: Started user@500.service. 
Sep 9 00:42:40.350745 systemd[1]: Started session-1.scope. Sep 9 00:42:40.403600 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:43636.service. Sep 9 00:42:40.442847 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 43636 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:42:40.444382 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:40.449484 systemd-logind[1299]: New session 2 of user core. Sep 9 00:42:40.450546 systemd[1]: Started session-2.scope. Sep 9 00:42:40.503928 sshd[1414]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:40.506345 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:43650.service. Sep 9 00:42:40.506816 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:43636.service: Deactivated successfully. Sep 9 00:42:40.507629 systemd-logind[1299]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:42:40.507690 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:42:40.508491 systemd-logind[1299]: Removed session 2. Sep 9 00:42:40.545737 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 43650 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:42:40.547317 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:40.551342 systemd[1]: Started session-3.scope. Sep 9 00:42:40.551629 systemd-logind[1299]: New session 3 of user core. Sep 9 00:42:40.607666 sshd[1420]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:40.609828 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:43652.service. Sep 9 00:42:40.611803 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:43650.service: Deactivated successfully. Sep 9 00:42:40.612476 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:42:40.613053 systemd-logind[1299]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:42:40.613671 systemd-logind[1299]: Removed session 3. 
Sep 9 00:42:40.655471 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 43652 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:42:40.657066 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:40.661033 systemd-logind[1299]: New session 4 of user core. Sep 9 00:42:40.661314 systemd[1]: Started session-4.scope. Sep 9 00:42:40.726870 sshd[1426]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:40.729847 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:43668.service. Sep 9 00:42:40.731706 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:43652.service: Deactivated successfully. Sep 9 00:42:40.732768 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:42:40.733586 systemd-logind[1299]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:42:40.734336 systemd-logind[1299]: Removed session 4. Sep 9 00:42:40.785834 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 43668 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:42:40.787093 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:40.792616 systemd-logind[1299]: New session 5 of user core. Sep 9 00:42:40.793470 systemd[1]: Started session-5.scope. Sep 9 00:42:40.866327 sudo[1439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:42:40.866625 sudo[1439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:42:40.879241 dbus-daemon[1288]: avc: received setenforce notice (enforcing=1) Sep 9 00:42:40.881055 sudo[1439]: pam_unix(sudo:session): session closed for user root Sep 9 00:42:40.883687 sshd[1433]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:40.887465 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:43682.service. Sep 9 00:42:40.887888 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:43668.service: Deactivated successfully. 
Sep 9 00:42:40.893121 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:42:40.893706 systemd-logind[1299]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:42:40.894852 systemd-logind[1299]: Removed session 5. Sep 9 00:42:40.931893 sshd[1442]: Accepted publickey for core from 10.0.0.1 port 43682 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:42:40.933288 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:40.936997 systemd-logind[1299]: New session 6 of user core. Sep 9 00:42:40.938240 systemd[1]: Started session-6.scope. Sep 9 00:42:40.997027 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:42:40.998260 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:42:41.001223 sudo[1448]: pam_unix(sudo:session): session closed for user root Sep 9 00:42:41.005749 sudo[1447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 00:42:41.005999 sudo[1447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:42:41.015218 systemd[1]: Stopping audit-rules.service... 
Sep 9 00:42:41.015000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 9 00:42:41.017836 auditctl[1451]: No rules Sep 9 00:42:41.018483 kernel: kauditd_printk_skb: 59 callbacks suppressed Sep 9 00:42:41.018531 kernel: audit: type=1305 audit(1757378561.015:147): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 9 00:42:41.018548 kernel: audit: type=1300 audit(1757378561.015:147): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeaec9830 a2=420 a3=0 items=0 ppid=1 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:42:41.015000 audit[1451]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeaec9830 a2=420 a3=0 items=0 ppid=1 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:42:41.018835 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:42:41.019063 systemd[1]: Stopped audit-rules.service. Sep 9 00:42:41.025264 kernel: audit: type=1327 audit(1757378561.015:147): proctitle=2F7362696E2F617564697463746C002D44 Sep 9 00:42:41.025320 kernel: audit: type=1131 audit(1757378561.017:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.015000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 9 00:42:41.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:42:41.026532 systemd[1]: Starting audit-rules.service... Sep 9 00:42:41.047014 augenrules[1469]: No rules Sep 9 00:42:41.048055 systemd[1]: Finished audit-rules.service. Sep 9 00:42:41.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.050997 kernel: audit: type=1130 audit(1757378561.047:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.051888 sudo[1447]: pam_unix(sudo:session): session closed for user root Sep 9 00:42:41.050000 audit[1447]: USER_END pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.054896 sshd[1442]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:41.050000 audit[1447]: CRED_DISP pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.057702 kernel: audit: type=1106 audit(1757378561.050:150): pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.057760 kernel: audit: type=1104 audit(1757378561.050:151): pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 9 00:42:41.057916 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:43692.service. Sep 9 00:42:41.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.119:22-10.0.0.1:43692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.065728 kernel: audit: type=1130 audit(1757378561.057:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.119:22-10.0.0.1:43692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.065791 kernel: audit: type=1106 audit(1757378561.059:153): pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.065808 kernel: audit: type=1104 audit(1757378561.060:154): pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.059000 audit[1442]: USER_END pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.060000 audit[1442]: CRED_DISP pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.065747 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:43682.service: Deactivated successfully. 
Sep 9 00:42:41.066415 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:42:41.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.119:22-10.0.0.1:43682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.072662 systemd-logind[1299]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:42:41.073713 systemd-logind[1299]: Removed session 6. Sep 9 00:42:41.100000 audit[1474]: USER_ACCT pid=1474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.101648 sshd[1474]: Accepted publickey for core from 10.0.0.1 port 43692 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:42:41.101000 audit[1474]: CRED_ACQ pid=1474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.101000 audit[1474]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd10e5b0 a2=3 a3=1 items=0 ppid=1 pid=1474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:42:41.101000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:42:41.102809 sshd[1474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:42:41.107887 systemd[1]: Started session-7.scope. Sep 9 00:42:41.108465 systemd-logind[1299]: New session 7 of user core. 
Sep 9 00:42:41.113000 audit[1474]: USER_START pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.114000 audit[1479]: CRED_ACQ pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:42:41.161000 audit[1480]: USER_ACCT pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.161000 audit[1480]: CRED_REFR pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.162756 sudo[1480]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:42:41.163000 sudo[1480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:42:41.164000 audit[1480]: USER_START pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:42:41.205266 systemd[1]: Starting docker.service... 
Sep 9 00:42:41.274451 env[1492]: time="2025-09-09T00:42:41.272646115Z" level=info msg="Starting up" Sep 9 00:42:41.278497 env[1492]: time="2025-09-09T00:42:41.278456592Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 9 00:42:41.278497 env[1492]: time="2025-09-09T00:42:41.278489618Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 9 00:42:41.278591 env[1492]: time="2025-09-09T00:42:41.278512713Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 9 00:42:41.278591 env[1492]: time="2025-09-09T00:42:41.278523043Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 9 00:42:41.281827 env[1492]: time="2025-09-09T00:42:41.281789475Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 9 00:42:41.281827 env[1492]: time="2025-09-09T00:42:41.281820307Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 9 00:42:41.281929 env[1492]: time="2025-09-09T00:42:41.281837339Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 9 00:42:41.281929 env[1492]: time="2025-09-09T00:42:41.281846672Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 9 00:42:41.290129 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4026166706-merged.mount: Deactivated successfully. Sep 9 00:42:41.481286 env[1492]: time="2025-09-09T00:42:41.481245275Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 9 00:42:41.481462 env[1492]: time="2025-09-09T00:42:41.481449135Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 9 00:42:41.481626 env[1492]: time="2025-09-09T00:42:41.481612670Z" level=info msg="Loading containers: start." 
Sep 9 00:42:41.545000 audit[1525]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.545000 audit[1525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc69d3700 a2=0 a3=1 items=0 ppid=1492 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.545000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Sep 9 00:42:41.547000 audit[1527]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.547000 audit[1527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffec6dc080 a2=0 a3=1 items=0 ppid=1492 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.547000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Sep 9 00:42:41.549000 audit[1529]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.549000 audit[1529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffda451c80 a2=0 a3=1 items=0 ppid=1492 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.549000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 9 00:42:41.551000 audit[1531]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.551000 audit[1531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe3c45d90 a2=0 a3=1 items=0 ppid=1492 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.551000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 9 00:42:41.554000 audit[1533]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.554000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcd684de0 a2=0 a3=1 items=0 ppid=1492 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.554000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Sep 9 00:42:41.593000 audit[1538]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.593000 audit[1538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc0a3a580 a2=0 a3=1 items=0 ppid=1492 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.593000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Sep 9 00:42:41.603000 audit[1540]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.603000 audit[1540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff565d7d0 a2=0 a3=1 items=0 ppid=1492 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.603000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Sep 9 00:42:41.604000 audit[1542]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.604000 audit[1542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffed598ef0 a2=0 a3=1 items=0 ppid=1492 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.604000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Sep 9 00:42:41.606000 audit[1544]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.606000 audit[1544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe0dd2230 a2=0 a3=1 items=0 ppid=1492 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.606000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 9 00:42:41.618000 audit[1548]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.618000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffce8a9bd0 a2=0 a3=1 items=0 ppid=1492 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.618000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 9 00:42:41.626000 audit[1549]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.626000 audit[1549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce825b10 a2=0 a3=1 items=0 ppid=1492 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.626000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 9 00:42:41.637002 kernel: Initializing XFRM netlink socket
Sep 9 00:42:41.659599 env[1492]: time="2025-09-09T00:42:41.659558873Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 9 00:42:41.693000 audit[1557]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.693000 audit[1557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc6c79c20 a2=0 a3=1 items=0 ppid=1492 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.693000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Sep 9 00:42:41.713000 audit[1560]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.713000 audit[1560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc6753410 a2=0 a3=1 items=0 ppid=1492 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.713000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Sep 9 00:42:41.720000 audit[1563]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.720000 audit[1563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc9576660 a2=0 a3=1 items=0 ppid=1492 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.720000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Sep 9 00:42:41.722000 audit[1565]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.722000 audit[1565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc1ce1d90 a2=0 a3=1 items=0 ppid=1492 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.722000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Sep 9 00:42:41.724000 audit[1567]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.724000 audit[1567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff6dee5d0 a2=0 a3=1 items=0 ppid=1492 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.724000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Sep 9 00:42:41.727000 audit[1569]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.727000 audit[1569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe953c470 a2=0 a3=1 items=0 ppid=1492 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.727000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Sep 9 00:42:41.729000 audit[1571]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.729000 audit[1571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffcdff9480 a2=0 a3=1 items=0 ppid=1492 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.729000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Sep 9 00:42:41.744000 audit[1574]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.744000 audit[1574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffc2c4ef60 a2=0 a3=1 items=0 ppid=1492 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.744000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Sep 9 00:42:41.746000 audit[1576]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.746000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffeb2cb840 a2=0 a3=1 items=0 ppid=1492 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.746000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 9 00:42:41.748000 audit[1578]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.748000 audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd5f95870 a2=0 a3=1 items=0 ppid=1492 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.748000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 9 00:42:41.752000 audit[1580]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.752000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd7221aa0 a2=0 a3=1 items=0 ppid=1492 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.752000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Sep 9 00:42:41.754233 systemd-networkd[1097]: docker0: Link UP
Sep 9 00:42:41.764000 audit[1584]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.764000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc5e650e0 a2=0 a3=1 items=0 ppid=1492 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.764000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 9 00:42:41.778000 audit[1585]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 9 00:42:41.778000 audit[1585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd54b64d0 a2=0 a3=1 items=0 ppid=1492 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:42:41.778000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 9 00:42:41.779624 env[1492]: time="2025-09-09T00:42:41.779563834Z" level=info msg="Loading containers: done."
Sep 9 00:42:41.802176 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck835684843-merged.mount: Deactivated successfully.
Sep 9 00:42:41.813960 env[1492]: time="2025-09-09T00:42:41.813899719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:42:41.814148 env[1492]: time="2025-09-09T00:42:41.814111636Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 9 00:42:41.814243 env[1492]: time="2025-09-09T00:42:41.814215939Z" level=info msg="Daemon has completed initialization"
Sep 9 00:42:41.832595 systemd[1]: Started docker.service.
Sep 9 00:42:41.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:41.842695 env[1492]: time="2025-09-09T00:42:41.839660068Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:42:42.532615 env[1317]: time="2025-09-09T00:42:42.532359096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 00:42:43.133948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031682799.mount: Deactivated successfully.
Sep 9 00:42:44.363099 env[1317]: time="2025-09-09T00:42:44.363046567Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:44.364503 env[1317]: time="2025-09-09T00:42:44.364473491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:44.366250 env[1317]: time="2025-09-09T00:42:44.366202919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:44.367734 env[1317]: time="2025-09-09T00:42:44.367706578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:44.368624 env[1317]: time="2025-09-09T00:42:44.368590535Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 9 00:42:44.370012 env[1317]: time="2025-09-09T00:42:44.369972026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 00:42:45.654316 env[1317]: time="2025-09-09T00:42:45.654271873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:45.655931 env[1317]: time="2025-09-09T00:42:45.655905833Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:45.658120 env[1317]: time="2025-09-09T00:42:45.658084047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:45.659945 env[1317]: time="2025-09-09T00:42:45.659898785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:45.660793 env[1317]: time="2025-09-09T00:42:45.660762787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 9 00:42:45.663475 env[1317]: time="2025-09-09T00:42:45.663437414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 00:42:46.865802 env[1317]: time="2025-09-09T00:42:46.865756152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:46.867470 env[1317]: time="2025-09-09T00:42:46.867429754Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:46.869214 env[1317]: time="2025-09-09T00:42:46.869182401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:46.871874 env[1317]: time="2025-09-09T00:42:46.871846162Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:46.872540 env[1317]: time="2025-09-09T00:42:46.872514189Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 9 00:42:46.873054 env[1317]: time="2025-09-09T00:42:46.873026722Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 00:42:47.307398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:42:47.309449 kernel: kauditd_printk_skb: 84 callbacks suppressed
Sep 9 00:42:47.309494 kernel: audit: type=1130 audit(1757378567.306:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:47.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:47.307571 systemd[1]: Stopped kubelet.service.
Sep 9 00:42:47.309192 systemd[1]: Starting kubelet.service...
Sep 9 00:42:47.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:47.312176 kernel: audit: type=1131 audit(1757378567.306:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:47.421215 systemd[1]: Started kubelet.service.
Sep 9 00:42:47.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:47.426997 kernel: audit: type=1130 audit(1757378567.420:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:47.477605 kubelet[1632]: E0909 00:42:47.477502 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:42:47.480149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:42:47.480300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:42:47.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 9 00:42:47.483012 kernel: audit: type=1131 audit(1757378567.479:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 9 00:42:47.983140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857582513.mount: Deactivated successfully.
Sep 9 00:42:48.558780 env[1317]: time="2025-09-09T00:42:48.558726403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:48.560884 env[1317]: time="2025-09-09T00:42:48.560842644Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:48.563633 env[1317]: time="2025-09-09T00:42:48.563606044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:48.565533 env[1317]: time="2025-09-09T00:42:48.565499573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:48.566012 env[1317]: time="2025-09-09T00:42:48.565967252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 9 00:42:48.566524 env[1317]: time="2025-09-09T00:42:48.566490828Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 00:42:49.202218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953151666.mount: Deactivated successfully.
Sep 9 00:42:50.137487 env[1317]: time="2025-09-09T00:42:50.137413000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.140413 env[1317]: time="2025-09-09T00:42:50.140373754Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.143601 env[1317]: time="2025-09-09T00:42:50.143562473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.145387 env[1317]: time="2025-09-09T00:42:50.145357901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.146210 env[1317]: time="2025-09-09T00:42:50.146178561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 00:42:50.146683 env[1317]: time="2025-09-09T00:42:50.146661309Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 00:42:50.594608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818564087.mount: Deactivated successfully.
Sep 9 00:42:50.604039 env[1317]: time="2025-09-09T00:42:50.603999092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.605963 env[1317]: time="2025-09-09T00:42:50.605925448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.607491 env[1317]: time="2025-09-09T00:42:50.607460179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.609034 env[1317]: time="2025-09-09T00:42:50.609005900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:50.609556 env[1317]: time="2025-09-09T00:42:50.609515625Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 00:42:50.610087 env[1317]: time="2025-09-09T00:42:50.610061000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 9 00:42:51.130445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085347153.mount: Deactivated successfully.
Sep 9 00:42:53.218060 env[1317]: time="2025-09-09T00:42:53.217450710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:53.222744 env[1317]: time="2025-09-09T00:42:53.222661133Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:53.225746 env[1317]: time="2025-09-09T00:42:53.225697437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:53.228726 env[1317]: time="2025-09-09T00:42:53.228694085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:42:53.229675 env[1317]: time="2025-09-09T00:42:53.229635227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 9 00:42:57.557404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:42:57.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:57.557576 systemd[1]: Stopped kubelet.service.
Sep 9 00:42:57.559023 systemd[1]: Starting kubelet.service...
Sep 9 00:42:57.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:57.561958 kernel: audit: type=1130 audit(1757378577.556:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:57.562037 kernel: audit: type=1131 audit(1757378577.556:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:57.671594 systemd[1]: Started kubelet.service.
Sep 9 00:42:57.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:57.674010 kernel: audit: type=1130 audit(1757378577.670:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:57.715728 kubelet[1669]: E0909 00:42:57.715687 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:42:57.717717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:42:57.717863 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:42:57.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 9 00:42:57.721001 kernel: audit: type=1131 audit(1757378577.717:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 9 00:42:59.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:59.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:59.043950 systemd[1]: Stopped kubelet.service.
Sep 9 00:42:59.045914 systemd[1]: Starting kubelet.service...
Sep 9 00:42:59.047986 kernel: audit: type=1130 audit(1757378579.043:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:59.048040 kernel: audit: type=1131 audit(1757378579.043:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:59.080373 systemd[1]: Reloading.
Sep 9 00:42:59.122966 /usr/lib/systemd/system-generators/torcx-generator[1704]: time="2025-09-09T00:42:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 9 00:42:59.123008 /usr/lib/systemd/system-generators/torcx-generator[1704]: time="2025-09-09T00:42:59Z" level=info msg="torcx already run"
Sep 9 00:42:59.211030 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:42:59.211203 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:42:59.226575 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:42:59.296302 systemd[1]: Started kubelet.service.
Sep 9 00:42:59.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:59.300007 kernel: audit: type=1130 audit(1757378579.296:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:42:59.301411 systemd[1]: Stopping kubelet.service...
Sep 9 00:42:59.302380 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:42:59.302616 systemd[1]: Stopped kubelet.service.
Sep 9 00:42:59.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:59.304077 systemd[1]: Starting kubelet.service... Sep 9 00:42:59.305008 kernel: audit: type=1131 audit(1757378579.301:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:59.394781 systemd[1]: Started kubelet.service. Sep 9 00:42:59.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:59.398012 kernel: audit: type=1130 audit(1757378579.394:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:42:59.427891 kubelet[1768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:42:59.427891 kubelet[1768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:42:59.427891 kubelet[1768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:42:59.428241 kubelet[1768]: I0909 00:42:59.427958 1768 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:43:00.317295 kubelet[1768]: I0909 00:43:00.317246 1768 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:43:00.317295 kubelet[1768]: I0909 00:43:00.317283 1768 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:43:00.317682 kubelet[1768]: I0909 00:43:00.317665 1768 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:43:00.338678 kubelet[1768]: E0909 00:43:00.338643 1768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:43:00.344547 kubelet[1768]: I0909 00:43:00.344505 1768 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:43:00.351254 kubelet[1768]: E0909 00:43:00.351216 1768 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:43:00.351254 kubelet[1768]: I0909 00:43:00.351249 1768 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:43:00.355536 kubelet[1768]: I0909 00:43:00.355502 1768 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:43:00.359668 kubelet[1768]: I0909 00:43:00.359640 1768 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:43:00.359820 kubelet[1768]: I0909 00:43:00.359781 1768 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:43:00.360001 kubelet[1768]: I0909 00:43:00.359813 1768 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Sep 9 00:43:00.360103 kubelet[1768]: I0909 00:43:00.360063 1768 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:43:00.360103 kubelet[1768]: I0909 00:43:00.360074 1768 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:43:00.360258 kubelet[1768]: I0909 00:43:00.360243 1768 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:43:00.365472 kubelet[1768]: I0909 00:43:00.365441 1768 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:43:00.365574 kubelet[1768]: I0909 00:43:00.365493 1768 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:43:00.365574 kubelet[1768]: I0909 00:43:00.365527 1768 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:43:00.365622 kubelet[1768]: I0909 00:43:00.365606 1768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:43:00.367094 kubelet[1768]: W0909 00:43:00.367040 1768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 9 00:43:00.367215 kubelet[1768]: E0909 00:43:00.367193 1768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:43:00.367726 kubelet[1768]: W0909 00:43:00.367688 1768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 9 00:43:00.367884 kubelet[1768]: E0909 
00:43:00.367864 1768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:43:00.403225 kubelet[1768]: I0909 00:43:00.403203 1768 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:43:00.404081 kubelet[1768]: I0909 00:43:00.404063 1768 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:43:00.404311 kubelet[1768]: W0909 00:43:00.404300 1768 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:43:00.405327 kubelet[1768]: I0909 00:43:00.405306 1768 server.go:1274] "Started kubelet" Sep 9 00:43:00.407447 kubelet[1768]: I0909 00:43:00.407406 1768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:43:00.406000 audit[1768]: AVC avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:00.409103 kubelet[1768]: I0909 00:43:00.409038 1768 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 9 00:43:00.409199 kubelet[1768]: I0909 00:43:00.409182 1768 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 9 00:43:00.409338 kubelet[1768]: I0909 00:43:00.409326 1768 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Sep 9 00:43:00.410302 kubelet[1768]: I0909 00:43:00.410271 1768 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:43:00.411002 kernel: audit: type=1400 audit(1757378580.406:202): avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:00.406000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:00.406000 audit[1768]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a66060 a1=4000a64318 a2=4000a66030 a3=25 items=0 ppid=1 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.406000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:00.408000 audit[1768]: AVC avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:00.408000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:00.408000 audit[1768]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40005adb80 a1=4000a64330 a2=4000a660f0 a3=25 items=0 ppid=1 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.408000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:00.411788 kubelet[1768]: I0909 00:43:00.411752 1768 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:43:00.411000 audit[1782]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.411000 audit[1782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffde47f420 a2=0 a3=1 items=0 ppid=1768 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 9 00:43:00.413807 kubelet[1768]: I0909 00:43:00.413786 1768 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:43:00.414577 kubelet[1768]: I0909 00:43:00.414558 1768 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:43:00.414776 kubelet[1768]: I0909 00:43:00.414762 1768 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:43:00.415069 kubelet[1768]: E0909 00:43:00.415043 1768 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:43:00.414000 audit[1783]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.414000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb947560 a2=0 a3=1 
items=0 ppid=1768 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 9 00:43:00.415877 kubelet[1768]: W0909 00:43:00.415844 1768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 9 00:43:00.415996 kubelet[1768]: E0909 00:43:00.415966 1768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:43:00.416061 kubelet[1768]: E0909 00:43:00.415896 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Sep 9 00:43:00.417059 kubelet[1768]: I0909 00:43:00.405545 1768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:43:00.417493 kubelet[1768]: I0909 00:43:00.417463 1768 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:43:00.416000 audit[1785]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.416000 audit[1785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd607c260 
a2=0 a3=1 items=0 ppid=1768 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 9 00:43:00.420000 audit[1787]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.420000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffddc5ff80 a2=0 a3=1 items=0 ppid=1768 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 9 00:43:00.422195 kubelet[1768]: I0909 00:43:00.422170 1768 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:43:00.422267 kubelet[1768]: I0909 00:43:00.422229 1768 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:43:00.422329 kubelet[1768]: I0909 00:43:00.422308 1768 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:43:00.422422 kubelet[1768]: E0909 00:43:00.420518 1768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863768204efc25f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:43:00.405273183 +0000 UTC m=+1.006969835,LastTimestamp:2025-09-09 00:43:00.405273183 +0000 UTC m=+1.006969835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:43:00.423725 kubelet[1768]: E0909 00:43:00.423696 1768 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:43:00.432000 audit[1793]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.432000 audit[1793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff7965970 a2=0 a3=1 items=0 ppid=1768 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 9 00:43:00.433384 kubelet[1768]: I0909 00:43:00.433352 1768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 9 00:43:00.433000 audit[1794]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:00.433000 audit[1794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdfbf51b0 a2=0 a3=1 items=0 ppid=1768 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 9 00:43:00.433000 audit[1795]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.433000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8f16940 a2=0 a3=1 items=0 ppid=1768 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 9 00:43:00.434874 kubelet[1768]: I0909 00:43:00.434857 1768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:43:00.434957 kubelet[1768]: I0909 00:43:00.434946 1768 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:43:00.435052 kubelet[1768]: I0909 00:43:00.435041 1768 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:43:00.435144 kubelet[1768]: E0909 00:43:00.435128 1768 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:43:00.434000 audit[1797]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:00.434000 audit[1797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb048170 a2=0 a3=1 items=0 ppid=1768 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 9 00:43:00.435000 audit[1796]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.435000 audit[1796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9f7e2e0 a2=0 a3=1 items=0 ppid=1768 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 9 00:43:00.436351 kubelet[1768]: W0909 00:43:00.436313 1768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 9 00:43:00.436450 kubelet[1768]: E0909 00:43:00.436433 1768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:43:00.435000 audit[1799]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:00.435000 audit[1799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffff3a24870 a2=0 a3=1 items=0 ppid=1768 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 9 00:43:00.436000 audit[1800]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:00.436000 audit[1800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd5a48bf0 a2=0 a3=1 items=0 ppid=1768 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 9 00:43:00.436000 audit[1802]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1802 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:00.436000 audit[1802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff98f3af0 a2=0 a3=1 items=0 ppid=1768 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 9 00:43:00.439025 kubelet[1768]: I0909 00:43:00.439006 1768 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:43:00.439025 kubelet[1768]: I0909 00:43:00.439023 1768 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:43:00.439118 kubelet[1768]: I0909 00:43:00.439040 1768 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:43:00.516675 kubelet[1768]: E0909 00:43:00.516633 1768 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:43:00.532930 kubelet[1768]: I0909 00:43:00.532903 1768 policy_none.go:49] "None policy: Start" Sep 9 00:43:00.533676 kubelet[1768]: I0909 00:43:00.533663 1768 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:43:00.533794 kubelet[1768]: I0909 00:43:00.533783 1768 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:43:00.535493 kubelet[1768]: E0909 00:43:00.535453 1768 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:43:00.538666 kubelet[1768]: I0909 00:43:00.538646 1768 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:43:00.538000 audit[1768]: AVC avc: denied { mac_admin } for pid=1768 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 9 00:43:00.538000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:00.538000 audit[1768]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000fc9bc0 a1=4000fc6a68 a2=4000fc9b90 a3=25 items=0 ppid=1 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:00.538000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:00.539574 kubelet[1768]: I0909 00:43:00.539554 1768 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 9 00:43:00.539775 kubelet[1768]: I0909 00:43:00.539762 1768 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:43:00.539877 kubelet[1768]: I0909 00:43:00.539847 1768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:43:00.540097 kubelet[1768]: I0909 00:43:00.540083 1768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:43:00.540963 kubelet[1768]: E0909 00:43:00.540944 1768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:43:00.616639 kubelet[1768]: E0909 00:43:00.616557 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" 
interval="400ms" Sep 9 00:43:00.641697 kubelet[1768]: I0909 00:43:00.641667 1768 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:43:00.642122 kubelet[1768]: E0909 00:43:00.642099 1768 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Sep 9 00:43:00.819275 kubelet[1768]: I0909 00:43:00.819245 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5fdec7017aeb83beb4d784ce28e6f78-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5fdec7017aeb83beb4d784ce28e6f78\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:43:00.819484 kubelet[1768]: I0909 00:43:00.819468 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5fdec7017aeb83beb4d784ce28e6f78-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c5fdec7017aeb83beb4d784ce28e6f78\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:43:00.819570 kubelet[1768]: I0909 00:43:00.819557 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:00.819682 kubelet[1768]: I0909 00:43:00.819669 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:00.819778 kubelet[1768]: I0909 00:43:00.819765 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:00.819870 kubelet[1768]: I0909 00:43:00.819858 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:00.819966 kubelet[1768]: I0909 00:43:00.819952 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5fdec7017aeb83beb4d784ce28e6f78-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5fdec7017aeb83beb4d784ce28e6f78\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:43:00.820097 kubelet[1768]: I0909 00:43:00.820083 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:00.820211 kubelet[1768]: I0909 00:43:00.820198 1768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " 
pod="kube-system/kube-scheduler-localhost" Sep 9 00:43:00.843425 kubelet[1768]: I0909 00:43:00.843388 1768 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:43:00.843896 kubelet[1768]: E0909 00:43:00.843853 1768 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Sep 9 00:43:01.017278 kubelet[1768]: E0909 00:43:01.017174 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Sep 9 00:43:01.042587 kubelet[1768]: E0909 00:43:01.042562 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:01.043216 kubelet[1768]: E0909 00:43:01.043152 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:01.043314 env[1317]: time="2025-09-09T00:43:01.043279695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c5fdec7017aeb83beb4d784ce28e6f78,Namespace:kube-system,Attempt:0,}" Sep 9 00:43:01.043575 env[1317]: time="2025-09-09T00:43:01.043496853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 00:43:01.044784 kubelet[1768]: E0909 00:43:01.044761 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:01.045100 env[1317]: 
time="2025-09-09T00:43:01.045070064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 00:43:01.225203 kubelet[1768]: W0909 00:43:01.225142 1768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 9 00:43:01.225203 kubelet[1768]: E0909 00:43:01.225206 1768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:43:01.245528 kubelet[1768]: I0909 00:43:01.245493 1768 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:43:01.245995 kubelet[1768]: E0909 00:43:01.245926 1768 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Sep 9 00:43:01.428123 kubelet[1768]: W0909 00:43:01.428057 1768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 9 00:43:01.428123 kubelet[1768]: E0909 00:43:01.428125 1768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" 
logger="UnhandledError" Sep 9 00:43:01.538093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965867349.mount: Deactivated successfully. Sep 9 00:43:01.541963 env[1317]: time="2025-09-09T00:43:01.541900387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.547442 env[1317]: time="2025-09-09T00:43:01.547392909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.548415 env[1317]: time="2025-09-09T00:43:01.548380195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.550659 env[1317]: time="2025-09-09T00:43:01.550616276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.552216 env[1317]: time="2025-09-09T00:43:01.552180769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.553188 env[1317]: time="2025-09-09T00:43:01.553160297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.555602 env[1317]: time="2025-09-09T00:43:01.555549468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.557728 env[1317]: time="2025-09-09T00:43:01.557694327Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.558549 env[1317]: time="2025-09-09T00:43:01.558525084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.560192 env[1317]: time="2025-09-09T00:43:01.560165482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.561058 env[1317]: time="2025-09-09T00:43:01.561028992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.562919 env[1317]: time="2025-09-09T00:43:01.562886628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:01.580338 env[1317]: time="2025-09-09T00:43:01.580210867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:01.580338 env[1317]: time="2025-09-09T00:43:01.580265497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:01.580338 env[1317]: time="2025-09-09T00:43:01.580276015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:01.580614 env[1317]: time="2025-09-09T00:43:01.580569837Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88d8bc764a32df6b85a91b2abc2065a7fc3b2310f91ec93cf1fa07891416bfaa pid=1815 runtime=io.containerd.runc.v2 Sep 9 00:43:01.584338 env[1317]: time="2025-09-09T00:43:01.584072989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:01.584338 env[1317]: time="2025-09-09T00:43:01.584108302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:01.584338 env[1317]: time="2025-09-09T00:43:01.584117941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:01.584338 env[1317]: time="2025-09-09T00:43:01.584221600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea12f3e2a26805fae237e604b86d3e67f6749bf19587af2811d53ee4bca1ae98 pid=1816 runtime=io.containerd.runc.v2 Sep 9 00:43:01.587859 env[1317]: time="2025-09-09T00:43:01.587716074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:01.587983 env[1317]: time="2025-09-09T00:43:01.587839370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:01.587983 env[1317]: time="2025-09-09T00:43:01.587868444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:01.588165 env[1317]: time="2025-09-09T00:43:01.588055688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c88059b3d81847207618c1514822ad9059da7edcc22ea83ffd7c349feca4815 pid=1852 runtime=io.containerd.runc.v2 Sep 9 00:43:01.595689 kubelet[1768]: W0909 00:43:01.595581 1768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 9 00:43:01.595689 kubelet[1768]: E0909 00:43:01.595653 1768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:43:01.633278 env[1317]: time="2025-09-09T00:43:01.633205946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"88d8bc764a32df6b85a91b2abc2065a7fc3b2310f91ec93cf1fa07891416bfaa\"" Sep 9 00:43:01.637133 kubelet[1768]: E0909 00:43:01.637094 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:01.639501 env[1317]: time="2025-09-09T00:43:01.639464117Z" level=info msg="CreateContainer within sandbox \"88d8bc764a32df6b85a91b2abc2065a7fc3b2310f91ec93cf1fa07891416bfaa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:43:01.649827 env[1317]: time="2025-09-09T00:43:01.649019362Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea12f3e2a26805fae237e604b86d3e67f6749bf19587af2811d53ee4bca1ae98\"" Sep 9 00:43:01.649827 env[1317]: time="2025-09-09T00:43:01.649278351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c5fdec7017aeb83beb4d784ce28e6f78,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c88059b3d81847207618c1514822ad9059da7edcc22ea83ffd7c349feca4815\"" Sep 9 00:43:01.650688 kubelet[1768]: E0909 00:43:01.650666 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:01.651275 env[1317]: time="2025-09-09T00:43:01.651229968Z" level=info msg="CreateContainer within sandbox \"88d8bc764a32df6b85a91b2abc2065a7fc3b2310f91ec93cf1fa07891416bfaa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"feff4af24f4f031ddb7dde023448773fff42f5df39ec552547a27a8e2234bdc7\"" Sep 9 00:43:01.651597 kubelet[1768]: E0909 00:43:01.651574 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:01.651868 env[1317]: time="2025-09-09T00:43:01.651841968Z" level=info msg="StartContainer for \"feff4af24f4f031ddb7dde023448773fff42f5df39ec552547a27a8e2234bdc7\"" Sep 9 00:43:01.652133 env[1317]: time="2025-09-09T00:43:01.652104516Z" level=info msg="CreateContainer within sandbox \"ea12f3e2a26805fae237e604b86d3e67f6749bf19587af2811d53ee4bca1ae98\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:43:01.652827 env[1317]: time="2025-09-09T00:43:01.652799300Z" level=info msg="CreateContainer within sandbox \"6c88059b3d81847207618c1514822ad9059da7edcc22ea83ffd7c349feca4815\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:43:01.669875 env[1317]: time="2025-09-09T00:43:01.669832477Z" level=info msg="CreateContainer within sandbox \"6c88059b3d81847207618c1514822ad9059da7edcc22ea83ffd7c349feca4815\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9bad5f3462ecd37b71ea87c2ee3e7f4c99ba784c74650e9a7384cc490ed6a71\"" Sep 9 00:43:01.671347 env[1317]: time="2025-09-09T00:43:01.671318625Z" level=info msg="StartContainer for \"f9bad5f3462ecd37b71ea87c2ee3e7f4c99ba784c74650e9a7384cc490ed6a71\"" Sep 9 00:43:01.672508 env[1317]: time="2025-09-09T00:43:01.672476518Z" level=info msg="CreateContainer within sandbox \"ea12f3e2a26805fae237e604b86d3e67f6749bf19587af2811d53ee4bca1ae98\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ff417e88aa81e5ccbde16799c0653df92732cf724f37b52bcf23d7d977eba87\"" Sep 9 00:43:01.672825 env[1317]: time="2025-09-09T00:43:01.672797415Z" level=info msg="StartContainer for \"8ff417e88aa81e5ccbde16799c0653df92732cf724f37b52bcf23d7d977eba87\"" Sep 9 00:43:01.720580 env[1317]: time="2025-09-09T00:43:01.719636901Z" level=info msg="StartContainer for \"feff4af24f4f031ddb7dde023448773fff42f5df39ec552547a27a8e2234bdc7\" returns successfully" Sep 9 00:43:01.754469 env[1317]: time="2025-09-09T00:43:01.754337410Z" level=info msg="StartContainer for \"f9bad5f3462ecd37b71ea87c2ee3e7f4c99ba784c74650e9a7384cc490ed6a71\" returns successfully" Sep 9 00:43:01.754969 env[1317]: time="2025-09-09T00:43:01.754934693Z" level=info msg="StartContainer for \"8ff417e88aa81e5ccbde16799c0653df92732cf724f37b52bcf23d7d977eba87\" returns successfully" Sep 9 00:43:01.818126 kubelet[1768]: E0909 00:43:01.818014 1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="1.6s" Sep 9 
00:43:02.047866 kubelet[1768]: I0909 00:43:02.047769 1768 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:43:02.442141 kubelet[1768]: E0909 00:43:02.442112 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:02.443452 kubelet[1768]: E0909 00:43:02.443428 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:02.445425 kubelet[1768]: E0909 00:43:02.445376 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:03.447502 kubelet[1768]: E0909 00:43:03.447430 1768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:04.070418 kubelet[1768]: E0909 00:43:04.070369 1768 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:43:04.113721 kubelet[1768]: E0909 00:43:04.113581 1768 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863768204efc25f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:43:00.405273183 +0000 UTC m=+1.006969835,LastTimestamp:2025-09-09 00:43:00.405273183 +0000 UTC m=+1.006969835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:43:04.166693 kubelet[1768]: I0909 00:43:04.166603 1768 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:43:04.166853 kubelet[1768]: E0909 00:43:04.166748 1768 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18637682052ec021 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:43:00.409401377 +0000 UTC m=+1.011098069,LastTimestamp:2025-09-09 00:43:00.409401377 +0000 UTC m=+1.011098069,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:43:04.368416 kubelet[1768]: I0909 00:43:04.368118 1768 apiserver.go:52] "Watching apiserver" Sep 9 00:43:04.414371 kubelet[1768]: I0909 00:43:04.414339 1768 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:43:06.339973 systemd[1]: Reloading. 
Sep 9 00:43:06.382602 /usr/lib/systemd/system-generators/torcx-generator[2064]: time="2025-09-09T00:43:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:43:06.382631 /usr/lib/systemd/system-generators/torcx-generator[2064]: time="2025-09-09T00:43:06Z" level=info msg="torcx already run" Sep 9 00:43:06.449383 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:43:06.449404 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:43:06.465482 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:43:06.545177 systemd[1]: Stopping kubelet.service... Sep 9 00:43:06.569299 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:43:06.569599 systemd[1]: Stopped kubelet.service. Sep 9 00:43:06.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:06.570185 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 9 00:43:06.570222 kernel: audit: type=1131 audit(1757378586.568:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:06.571212 systemd[1]: Starting kubelet.service... Sep 9 00:43:06.666970 systemd[1]: Started kubelet.service. 
Sep 9 00:43:06.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:06.671003 kernel: audit: type=1130 audit(1757378586.665:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:06.707245 kubelet[2118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:43:06.707583 kubelet[2118]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:43:06.707652 kubelet[2118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:43:06.707786 kubelet[2118]: I0909 00:43:06.707757 2118 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:43:06.713370 kubelet[2118]: I0909 00:43:06.713333 2118 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:43:06.713370 kubelet[2118]: I0909 00:43:06.713361 2118 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:43:06.713585 kubelet[2118]: I0909 00:43:06.713569 2118 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:43:06.714843 kubelet[2118]: I0909 00:43:06.714828 2118 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:43:06.716758 kubelet[2118]: I0909 00:43:06.716733 2118 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:43:06.729753 kubelet[2118]: E0909 00:43:06.729702 2118 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:43:06.729753 kubelet[2118]: I0909 00:43:06.729741 2118 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:43:06.732060 kubelet[2118]: I0909 00:43:06.732036 2118 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:43:06.732364 kubelet[2118]: I0909 00:43:06.732347 2118 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:43:06.732470 kubelet[2118]: I0909 00:43:06.732446 2118 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:43:06.733005 kubelet[2118]: I0909 00:43:06.732472 2118 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Sep 9 00:43:06.733005 kubelet[2118]: I0909 00:43:06.732736 2118 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:43:06.733005 kubelet[2118]: I0909 00:43:06.732746 2118 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:43:06.733005 kubelet[2118]: I0909 00:43:06.732778 2118 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:43:06.733005 kubelet[2118]: I0909 00:43:06.732887 2118 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:43:06.733493 kubelet[2118]: I0909 00:43:06.732900 2118 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:43:06.733493 kubelet[2118]: I0909 00:43:06.732916 2118 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:43:06.733493 kubelet[2118]: I0909 00:43:06.733329 2118 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:43:06.736000 audit[2118]: AVC avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:06.736000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.734212 2118 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.734670 2118 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.735033 2118 server.go:1274] "Started kubelet" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.736608 2118 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.737214 2118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 
00:43:06.743924 kubelet[2118]: I0909 00:43:06.737371 2118 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.737386 2118 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.738531 2118 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.738569 2118 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 9 00:43:06.743924 kubelet[2118]: I0909 00:43:06.738590 2118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:43:06.744155 kubelet[2118]: I0909 00:43:06.744060 2118 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:43:06.744633 kernel: audit: type=1400 audit(1757378586.736:219): avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:06.744704 kernel: audit: type=1401 audit(1757378586.736:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:06.744722 kernel: audit: type=1300 audit(1757378586.736:219): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bf6ae0 a1=4000bea6c0 a2=4000bf6ab0 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:06.736000 
audit[2118]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bf6ae0 a1=4000bea6c0 a2=4000bf6ab0 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:06.745296 kubelet[2118]: I0909 00:43:06.745273 2118 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:43:06.745468 kubelet[2118]: I0909 00:43:06.745453 2118 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:43:06.745902 kubelet[2118]: I0909 00:43:06.745886 2118 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:43:06.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:06.750901 kernel: audit: type=1327 audit(1757378586.736:219): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:06.750943 kernel: audit: type=1400 audit(1757378586.736:220): avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:06.736000 audit[2118]: AVC avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:06.751752 kubelet[2118]: I0909 00:43:06.751727 2118 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:43:06.752059 kubelet[2118]: I0909 
00:43:06.752036 2118 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:43:06.736000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:06.755831 kernel: audit: type=1401 audit(1757378586.736:220): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:06.760665 kernel: audit: type=1300 audit(1757378586.736:220): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bcac00 a1=4000bea6d8 a2=4000bf6b70 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:06.760733 kernel: audit: type=1327 audit(1757378586.736:220): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:06.736000 audit[2118]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bcac00 a1=4000bea6d8 a2=4000bf6b70 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:06.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:06.761049 kubelet[2118]: I0909 00:43:06.757638 2118 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:43:06.761049 kubelet[2118]: 
E0909 00:43:06.758890 2118 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:43:06.777911 kubelet[2118]: I0909 00:43:06.777880 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:43:06.778933 kubelet[2118]: I0909 00:43:06.778913 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:43:06.779045 kubelet[2118]: I0909 00:43:06.779032 2118 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:43:06.779114 kubelet[2118]: I0909 00:43:06.779104 2118 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:43:06.779216 kubelet[2118]: E0909 00:43:06.779197 2118 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:43:06.807470 kubelet[2118]: I0909 00:43:06.807443 2118 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:43:06.807470 kubelet[2118]: I0909 00:43:06.807464 2118 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:43:06.807633 kubelet[2118]: I0909 00:43:06.807484 2118 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:43:06.807672 kubelet[2118]: I0909 00:43:06.807651 2118 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:43:06.807699 kubelet[2118]: I0909 00:43:06.807668 2118 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:43:06.807699 kubelet[2118]: I0909 00:43:06.807691 2118 policy_none.go:49] "None policy: Start" Sep 9 00:43:06.808444 kubelet[2118]: I0909 00:43:06.808426 2118 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:43:06.808519 kubelet[2118]: I0909 00:43:06.808462 2118 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:43:06.808637 kubelet[2118]: I0909 00:43:06.808619 2118 state_mem.go:75] "Updated machine memory state" Sep 9 00:43:06.809794 kubelet[2118]: I0909 
00:43:06.809757 2118 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:43:06.807000 audit[2118]: AVC avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:06.807000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 9 00:43:06.807000 audit[2118]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400116f020 a1=400116aac8 a2=400116eff0 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:06.807000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 9 00:43:06.810121 kubelet[2118]: I0909 00:43:06.809823 2118 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 9 00:43:06.810121 kubelet[2118]: I0909 00:43:06.809959 2118 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:43:06.810121 kubelet[2118]: I0909 00:43:06.809970 2118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:43:06.810702 kubelet[2118]: I0909 00:43:06.810674 2118 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:43:06.913916 kubelet[2118]: I0909 00:43:06.913880 2118 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:43:06.920095 kubelet[2118]: I0909 00:43:06.919800 2118 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 00:43:06.920095 kubelet[2118]: I0909 00:43:06.919886 2118 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:43:06.947413 kubelet[2118]: I0909 00:43:06.947376 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5fdec7017aeb83beb4d784ce28e6f78-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5fdec7017aeb83beb4d784ce28e6f78\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:43:06.947627 kubelet[2118]: I0909 00:43:06.947598 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5fdec7017aeb83beb4d784ce28e6f78-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c5fdec7017aeb83beb4d784ce28e6f78\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:43:06.947709 kubelet[2118]: I0909 00:43:06.947695 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:06.947805 kubelet[2118]: I0909 00:43:06.947791 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:06.947880 kubelet[2118]: I0909 00:43:06.947868 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:43:06.947954 kubelet[2118]: I0909 00:43:06.947942 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5fdec7017aeb83beb4d784ce28e6f78-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c5fdec7017aeb83beb4d784ce28e6f78\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:43:06.948055 kubelet[2118]: I0909 00:43:06.948042 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:06.948133 kubelet[2118]: I0909 00:43:06.948120 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:06.948219 kubelet[2118]: I0909 00:43:06.948206 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:07.190065 kubelet[2118]: E0909 00:43:07.189970 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:07.190065 kubelet[2118]: E0909 00:43:07.189996 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:07.190187 kubelet[2118]: E0909 00:43:07.190068 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:07.734533 kubelet[2118]: I0909 00:43:07.734491 2118 apiserver.go:52] "Watching apiserver" Sep 9 00:43:07.745838 kubelet[2118]: I0909 00:43:07.745809 2118 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:43:07.789664 kubelet[2118]: E0909 00:43:07.789634 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:07.789763 kubelet[2118]: E0909 00:43:07.789654 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:07.806005 kubelet[2118]: E0909 00:43:07.801032 2118 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:43:07.806005 kubelet[2118]: E0909 00:43:07.801241 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:07.821213 kubelet[2118]: I0909 00:43:07.820708 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.820693004 podStartE2EDuration="1.820693004s" podCreationTimestamp="2025-09-09 00:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:43:07.813742944 +0000 UTC m=+1.141729676" watchObservedRunningTime="2025-09-09 00:43:07.820693004 +0000 UTC m=+1.148679736" Sep 9 00:43:07.821213 kubelet[2118]: I0909 00:43:07.820820 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.820815434 podStartE2EDuration="1.820815434s" podCreationTimestamp="2025-09-09 00:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:43:07.820594933 +0000 UTC m=+1.148581665" watchObservedRunningTime="2025-09-09 00:43:07.820815434 +0000 UTC m=+1.148802126" Sep 9 00:43:07.829239 kubelet[2118]: I0909 00:43:07.829195 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.829183128 podStartE2EDuration="1.829183128s" podCreationTimestamp="2025-09-09 00:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:43:07.829008464 +0000 UTC m=+1.156995196" watchObservedRunningTime="2025-09-09 00:43:07.829183128 +0000 UTC m=+1.157169860" Sep 9 00:43:08.791156 kubelet[2118]: E0909 00:43:08.791122 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:08.791156 kubelet[2118]: E0909 00:43:08.791140 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:08.791522 kubelet[2118]: E0909 00:43:08.791203 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:12.320704 kubelet[2118]: I0909 00:43:12.320672 2118 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:43:12.321068 env[1317]: time="2025-09-09T00:43:12.320990126Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 00:43:12.321245 kubelet[2118]: I0909 00:43:12.321219 2118 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:43:12.901897 kubelet[2118]: E0909 00:43:12.901843 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:13.189891 kubelet[2118]: I0909 00:43:13.189786 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hhm\" (UniqueName: \"kubernetes.io/projected/c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d-kube-api-access-h4hhm\") pod \"kube-proxy-8jdlq\" (UID: \"c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d\") " pod="kube-system/kube-proxy-8jdlq" Sep 9 00:43:13.189891 kubelet[2118]: I0909 00:43:13.189838 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d-kube-proxy\") pod \"kube-proxy-8jdlq\" (UID: \"c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d\") " pod="kube-system/kube-proxy-8jdlq" Sep 9 00:43:13.189891 kubelet[2118]: I0909 00:43:13.189859 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d-xtables-lock\") pod \"kube-proxy-8jdlq\" (UID: \"c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d\") " pod="kube-system/kube-proxy-8jdlq" Sep 9 00:43:13.189891 kubelet[2118]: I0909 00:43:13.189873 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d-lib-modules\") pod \"kube-proxy-8jdlq\" (UID: \"c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d\") " pod="kube-system/kube-proxy-8jdlq" Sep 9 00:43:13.297484 kubelet[2118]: E0909 00:43:13.297452 2118 projected.go:288] 
Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:43:13.297630 kubelet[2118]: E0909 00:43:13.297618 2118 projected.go:194] Error preparing data for projected volume kube-api-access-h4hhm for pod kube-system/kube-proxy-8jdlq: configmap "kube-root-ca.crt" not found Sep 9 00:43:13.297750 kubelet[2118]: E0909 00:43:13.297735 2118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d-kube-api-access-h4hhm podName:c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d nodeName:}" failed. No retries permitted until 2025-09-09 00:43:13.797712773 +0000 UTC m=+7.125699465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h4hhm" (UniqueName: "kubernetes.io/projected/c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d-kube-api-access-h4hhm") pod "kube-proxy-8jdlq" (UID: "c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d") : configmap "kube-root-ca.crt" not found Sep 9 00:43:13.593016 kubelet[2118]: I0909 00:43:13.592950 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl977\" (UniqueName: \"kubernetes.io/projected/81e9357f-d856-4f5f-8e8f-677c63eb7ef9-kube-api-access-bl977\") pod \"tigera-operator-58fc44c59b-rcrrz\" (UID: \"81e9357f-d856-4f5f-8e8f-677c63eb7ef9\") " pod="tigera-operator/tigera-operator-58fc44c59b-rcrrz" Sep 9 00:43:13.593405 kubelet[2118]: I0909 00:43:13.593029 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81e9357f-d856-4f5f-8e8f-677c63eb7ef9-var-lib-calico\") pod \"tigera-operator-58fc44c59b-rcrrz\" (UID: \"81e9357f-d856-4f5f-8e8f-677c63eb7ef9\") " pod="tigera-operator/tigera-operator-58fc44c59b-rcrrz" Sep 9 00:43:13.700985 kubelet[2118]: I0909 00:43:13.700932 2118 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 9 00:43:13.798292 kubelet[2118]: E0909 00:43:13.798260 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:13.825940 env[1317]: time="2025-09-09T00:43:13.825889137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-rcrrz,Uid:81e9357f-d856-4f5f-8e8f-677c63eb7ef9,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:43:13.841284 env[1317]: time="2025-09-09T00:43:13.841221313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:13.841284 env[1317]: time="2025-09-09T00:43:13.841259473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:13.841454 env[1317]: time="2025-09-09T00:43:13.841269993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:13.841672 env[1317]: time="2025-09-09T00:43:13.841634230Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e36a70eceb33b7ee2a2160a7c7fb5fff4ede7f3857c7ef045e8a4e9701a1cfc pid=2176 runtime=io.containerd.runc.v2 Sep 9 00:43:13.885879 env[1317]: time="2025-09-09T00:43:13.885778570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-rcrrz,Uid:81e9357f-d856-4f5f-8e8f-677c63eb7ef9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3e36a70eceb33b7ee2a2160a7c7fb5fff4ede7f3857c7ef045e8a4e9701a1cfc\"" Sep 9 00:43:13.888442 env[1317]: time="2025-09-09T00:43:13.888412952Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:43:14.070572 kubelet[2118]: E0909 00:43:14.068507 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:14.070712 env[1317]: time="2025-09-09T00:43:14.069170745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jdlq,Uid:c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d,Namespace:kube-system,Attempt:0,}" Sep 9 00:43:14.083232 env[1317]: time="2025-09-09T00:43:14.082959856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:14.083232 env[1317]: time="2025-09-09T00:43:14.083008736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:14.083232 env[1317]: time="2025-09-09T00:43:14.083018695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:14.083232 env[1317]: time="2025-09-09T00:43:14.083141695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5418701419248333debad89d8e32bad35f4669ba9d5485ad5621822b54d6b2d8 pid=2218 runtime=io.containerd.runc.v2 Sep 9 00:43:14.124439 env[1317]: time="2025-09-09T00:43:14.124382789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jdlq,Uid:c8e0c1d4-ffb5-4b6a-8e9c-eed397eacc6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5418701419248333debad89d8e32bad35f4669ba9d5485ad5621822b54d6b2d8\"" Sep 9 00:43:14.126580 kubelet[2118]: E0909 00:43:14.126263 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:14.131530 env[1317]: time="2025-09-09T00:43:14.131480463Z" level=info msg="CreateContainer within sandbox \"5418701419248333debad89d8e32bad35f4669ba9d5485ad5621822b54d6b2d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:43:14.151557 env[1317]: time="2025-09-09T00:43:14.151193376Z" level=info msg="CreateContainer within sandbox \"5418701419248333debad89d8e32bad35f4669ba9d5485ad5621822b54d6b2d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a06804840b357b1f2d2ce2058d54d7960bc8e8b388fd3bea9bcbc0f9bbe594e\"" Sep 9 00:43:14.152123 env[1317]: time="2025-09-09T00:43:14.152093730Z" level=info msg="StartContainer for \"0a06804840b357b1f2d2ce2058d54d7960bc8e8b388fd3bea9bcbc0f9bbe594e\"" Sep 9 00:43:14.211292 env[1317]: time="2025-09-09T00:43:14.211250749Z" level=info msg="StartContainer for \"0a06804840b357b1f2d2ce2058d54d7960bc8e8b388fd3bea9bcbc0f9bbe594e\" returns successfully" Sep 9 00:43:14.355010 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 9 00:43:14.355097 kernel: audit: type=1325 audit(1757378594.351:222): 
table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.355137 kernel: audit: type=1300 audit(1757378594.351:222): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5f92820 a2=0 a3=1 items=0 ppid=2268 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.351000 audit[2318]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.351000 audit[2318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5f92820 a2=0 a3=1 items=0 ppid=2268 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 9 00:43:14.358726 kernel: audit: type=1327 audit(1757378594.351:222): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 9 00:43:14.358826 kernel: audit: type=1325 audit(1757378594.351:223): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.351000 audit[2320]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.351000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc803ee40 a2=0 a3=1 items=0 ppid=2268 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.363214 kernel: audit: type=1300 audit(1757378594.351:223): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc803ee40 a2=0 a3=1 items=0 ppid=2268 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 9 00:43:14.364732 kernel: audit: type=1327 audit(1757378594.351:223): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 9 00:43:14.353000 audit[2321]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.366951 kernel: audit: type=1325 audit(1757378594.353:224): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.367001 kernel: audit: type=1300 audit(1757378594.353:224): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc722b0e0 a2=0 a3=1 items=0 ppid=2268 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.353000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc722b0e0 a2=0 a3=1 items=0 ppid=2268 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 9 
00:43:14.371840 kernel: audit: type=1327 audit(1757378594.353:224): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 9 00:43:14.371902 kernel: audit: type=1325 audit(1757378594.353:225): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.353000 audit[2319]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.353000 audit[2319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc07be0e0 a2=0 a3=1 items=0 ppid=2268 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 9 00:43:14.353000 audit[2322]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.353000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffee9cb10 a2=0 a3=1 items=0 ppid=2268 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 9 00:43:14.356000 audit[2323]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.356000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcef92400 a2=0 
a3=1 items=0 ppid=2268 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 9 00:43:14.453000 audit[2324]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.453000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff1ea71e0 a2=0 a3=1 items=0 ppid=2268 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 9 00:43:14.455000 audit[2326]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.455000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe8b04210 a2=0 a3=1 items=0 ppid=2268 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 9 00:43:14.458000 audit[2329]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2329 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.458000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc2ddeaf0 a2=0 a3=1 items=0 ppid=2268 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 9 00:43:14.459000 audit[2330]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.459000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0327e10 a2=0 a3=1 items=0 ppid=2268 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 9 00:43:14.461000 audit[2332]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.461000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff68e45f0 a2=0 a3=1 items=0 ppid=2268 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.461000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 9 00:43:14.462000 audit[2333]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.462000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7c49d90 a2=0 a3=1 items=0 ppid=2268 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 9 00:43:14.464000 audit[2335]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.464000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcb1aa5f0 a2=0 a3=1 items=0 ppid=2268 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 9 00:43:14.467000 audit[2338]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.467000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc5caa560 
a2=0 a3=1 items=0 ppid=2268 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 9 00:43:14.468000 audit[2339]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.468000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2bba060 a2=0 a3=1 items=0 ppid=2268 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 9 00:43:14.471000 audit[2341]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.471000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd8bd850 a2=0 a3=1 items=0 ppid=2268 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 9 00:43:14.472000 
audit[2342]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.472000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7cfc4b0 a2=0 a3=1 items=0 ppid=2268 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 9 00:43:14.474000 audit[2344]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.474000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb4c5c60 a2=0 a3=1 items=0 ppid=2268 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 9 00:43:14.477000 audit[2347]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.477000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd8011c0 a2=0 a3=1 items=0 ppid=2268 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.477000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 9 00:43:14.480000 audit[2350]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.480000 audit[2350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2f996c0 a2=0 a3=1 items=0 ppid=2268 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 9 00:43:14.481000 audit[2351]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.481000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcd7ee2a0 a2=0 a3=1 items=0 ppid=2268 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 9 00:43:14.483000 audit[2353]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.483000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=ffffc61427f0 a2=0 a3=1 items=0 ppid=2268 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 9 00:43:14.486000 audit[2356]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.486000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf959e90 a2=0 a3=1 items=0 ppid=2268 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 9 00:43:14.487000 audit[2357]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.487000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2515ca0 a2=0 a3=1 items=0 ppid=2268 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 9 00:43:14.489000 audit[2359]: NETFILTER_CFG table=nat:62 
family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 9 00:43:14.489000 audit[2359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffec703b70 a2=0 a3=1 items=0 ppid=2268 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 9 00:43:14.508000 audit[2365]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:14.508000 audit[2365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd236e9e0 a2=0 a3=1 items=0 ppid=2268 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:14.518000 audit[2365]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:14.518000 audit[2365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd236e9e0 a2=0 a3=1 items=0 ppid=2268 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.518000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:14.520000 audit[2370]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.520000 audit[2370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc1ab1200 a2=0 a3=1 items=0 ppid=2268 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 9 00:43:14.522000 audit[2372]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.522000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd03a1e40 a2=0 a3=1 items=0 ppid=2268 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 9 00:43:14.525000 audit[2375]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.525000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc25a8f80 a2=0 a3=1 items=0 ppid=2268 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 9 00:43:14.526000 audit[2376]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.526000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1f6ec00 a2=0 a3=1 items=0 ppid=2268 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 9 00:43:14.528000 audit[2378]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.528000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd9d7cdc0 a2=0 a3=1 items=0 ppid=2268 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 9 00:43:14.529000 audit[2379]: NETFILTER_CFG table=filter:70 family=10 entries=1 
op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.529000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc55740b0 a2=0 a3=1 items=0 ppid=2268 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 9 00:43:14.531000 audit[2381]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.531000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc0becfc0 a2=0 a3=1 items=0 ppid=2268 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 9 00:43:14.533000 audit[2384]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.533000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffff6f3770 a2=0 a3=1 items=0 ppid=2268 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.533000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 9 00:43:14.534000 audit[2385]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.534000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecf96530 a2=0 a3=1 items=0 ppid=2268 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 9 00:43:14.537000 audit[2387]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.537000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffd862c90 a2=0 a3=1 items=0 ppid=2268 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 9 00:43:14.538000 audit[2388]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.538000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe27e70a0 a2=0 a3=1 
items=0 ppid=2268 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 9 00:43:14.540000 audit[2390]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.540000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffdecb940 a2=0 a3=1 items=0 ppid=2268 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 9 00:43:14.542000 audit[2393]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.542000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdfb722d0 a2=0 a3=1 items=0 ppid=2268 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.542000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 9 00:43:14.545000 audit[2396]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.545000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc0115960 a2=0 a3=1 items=0 ppid=2268 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 9 00:43:14.546000 audit[2397]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.546000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffed3a0090 a2=0 a3=1 items=0 ppid=2268 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.546000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 9 00:43:14.548000 audit[2399]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.548000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 
a1=ffffda46e670 a2=0 a3=1 items=0 ppid=2268 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 9 00:43:14.551000 audit[2402]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.551000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc5dd5e90 a2=0 a3=1 items=0 ppid=2268 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 9 00:43:14.552000 audit[2403]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.552000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8d6f910 a2=0 a3=1 items=0 ppid=2268 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 9 00:43:14.554000 audit[2405]: NETFILTER_CFG 
table=nat:83 family=10 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.554000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc9790e70 a2=0 a3=1 items=0 ppid=2268 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 9 00:43:14.555000 audit[2406]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.555000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0e5e840 a2=0 a3=1 items=0 ppid=2268 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 9 00:43:14.557000 audit[2408]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.557000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd85af090 a2=0 a3=1 items=0 ppid=2268 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.557000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 9 00:43:14.560000 audit[2411]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 9 00:43:14.560000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff2e4d6d0 a2=0 a3=1 items=0 ppid=2268 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 9 00:43:14.562000 audit[2413]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 9 00:43:14.562000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffffa9b0e30 a2=0 a3=1 items=0 ppid=2268 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.562000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:14.563000 audit[2413]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 9 00:43:14.563000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffffa9b0e30 a2=0 a3=1 items=0 ppid=2268 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:14.563000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:14.707361 systemd[1]: run-containerd-runc-k8s.io-3e36a70eceb33b7ee2a2160a7c7fb5fff4ede7f3857c7ef045e8a4e9701a1cfc-runc.tr8agv.mount: Deactivated successfully. Sep 9 00:43:14.804933 kubelet[2118]: E0909 00:43:14.803671 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:14.804933 kubelet[2118]: E0909 00:43:14.804369 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:14.871581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2748148397.mount: Deactivated successfully. Sep 9 00:43:15.442606 env[1317]: time="2025-09-09T00:43:15.442559003Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:15.448436 env[1317]: time="2025-09-09T00:43:15.448397328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:15.449625 env[1317]: time="2025-09-09T00:43:15.449598360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:15.451037 env[1317]: time="2025-09-09T00:43:15.451016272Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 9 00:43:15.451766 env[1317]: time="2025-09-09T00:43:15.451737667Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 00:43:15.459121 env[1317]: time="2025-09-09T00:43:15.459088022Z" level=info msg="CreateContainer within sandbox \"3e36a70eceb33b7ee2a2160a7c7fb5fff4ede7f3857c7ef045e8a4e9701a1cfc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:43:15.471050 env[1317]: time="2025-09-09T00:43:15.470958710Z" level=info msg="CreateContainer within sandbox \"3e36a70eceb33b7ee2a2160a7c7fb5fff4ede7f3857c7ef045e8a4e9701a1cfc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c45833ed4f97482f8546855d3b05250f5a524e15bf346c36d32f81dd5148ca1f\"" Sep 9 00:43:15.473675 env[1317]: time="2025-09-09T00:43:15.473646494Z" level=info msg="StartContainer for \"c45833ed4f97482f8546855d3b05250f5a524e15bf346c36d32f81dd5148ca1f\"" Sep 9 00:43:15.531943 env[1317]: time="2025-09-09T00:43:15.531893818Z" level=info msg="StartContainer for \"c45833ed4f97482f8546855d3b05250f5a524e15bf346c36d32f81dd5148ca1f\" returns successfully" Sep 9 00:43:15.705607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087052570.mount: Deactivated successfully. 
Sep 9 00:43:15.726522 kubelet[2118]: E0909 00:43:15.726484 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:15.740950 kubelet[2118]: I0909 00:43:15.740899 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8jdlq" podStartSLOduration=2.740885462 podStartE2EDuration="2.740885462s" podCreationTimestamp="2025-09-09 00:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:43:14.817017845 +0000 UTC m=+8.145004577" watchObservedRunningTime="2025-09-09 00:43:15.740885462 +0000 UTC m=+9.068872154" Sep 9 00:43:15.806302 kubelet[2118]: E0909 00:43:15.806264 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:18.065299 kubelet[2118]: E0909 00:43:18.065256 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:18.081628 kubelet[2118]: I0909 00:43:18.081567 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-rcrrz" podStartSLOduration=3.51438621 podStartE2EDuration="5.081552422s" podCreationTimestamp="2025-09-09 00:43:13 +0000 UTC" firstStartedPulling="2025-09-09 00:43:13.88715332 +0000 UTC m=+7.215140052" lastFinishedPulling="2025-09-09 00:43:15.454319532 +0000 UTC m=+8.782306264" observedRunningTime="2025-09-09 00:43:15.826553459 +0000 UTC m=+9.154540151" watchObservedRunningTime="2025-09-09 00:43:18.081552422 +0000 UTC m=+11.409539154" Sep 9 00:43:20.032967 update_engine[1302]: I0909 00:43:20.032547 1302 update_attempter.cc:509] Updating boot flags... 
Sep 9 00:43:20.817325 sudo[1480]: pam_unix(sudo:session): session closed for user root Sep 9 00:43:20.822027 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 9 00:43:20.822123 kernel: audit: type=1106 audit(1757378600.816:273): pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:43:20.822158 kernel: audit: type=1104 audit(1757378600.816:274): pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:43:20.816000 audit[1480]: USER_END pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:43:20.816000 audit[1480]: CRED_DISP pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 9 00:43:20.824217 sshd[1474]: pam_unix(sshd:session): session closed for user core Sep 9 00:43:20.824000 audit[1474]: USER_END pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:20.827523 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:43692.service: Deactivated successfully. Sep 9 00:43:20.828304 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 9 00:43:20.825000 audit[1474]: CRED_DISP pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:20.831730 kernel: audit: type=1106 audit(1757378600.824:275): pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:20.831815 kernel: audit: type=1104 audit(1757378600.825:276): pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:20.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.119:22-10.0.0.1:43692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:20.834735 systemd-logind[1299]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:43:20.835019 kernel: audit: type=1131 audit(1757378600.826:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.119:22-10.0.0.1:43692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:20.835495 systemd-logind[1299]: Removed session 7. 
Sep 9 00:43:21.730000 audit[2519]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:21.733993 kernel: audit: type=1325 audit(1757378601.730:278): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:21.730000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffeabe94d0 a2=0 a3=1 items=0 ppid=2268 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:21.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:21.739832 kernel: audit: type=1300 audit(1757378601.730:278): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffeabe94d0 a2=0 a3=1 items=0 ppid=2268 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:21.739910 kernel: audit: type=1327 audit(1757378601.730:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:21.742000 audit[2519]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:21.742000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeabe94d0 a2=0 a3=1 items=0 ppid=2268 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:21.749217 kernel: 
audit: type=1325 audit(1757378601.742:279): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:21.749292 kernel: audit: type=1300 audit(1757378601.742:279): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeabe94d0 a2=0 a3=1 items=0 ppid=2268 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:21.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:21.835000 audit[2521]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:21.835000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcf2411f0 a2=0 a3=1 items=0 ppid=2268 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:21.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:21.842000 audit[2521]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:21.842000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf2411f0 a2=0 a3=1 items=0 ppid=2268 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:21.842000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:25.165000 audit[2524]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:25.165000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc9792e20 a2=0 a3=1 items=0 ppid=2268 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:25.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:25.171000 audit[2524]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:25.171000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc9792e20 a2=0 a3=1 items=0 ppid=2268 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:25.171000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:25.221000 audit[2526]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:25.221000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff9d7dbb0 a2=0 a3=1 items=0 ppid=2268 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 
00:43:25.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:25.228000 audit[2526]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:25.228000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9d7dbb0 a2=0 a3=1 items=0 ppid=2268 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:25.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:25.275938 kubelet[2118]: I0909 00:43:25.275902 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b406cd83-14bb-451e-ad7b-1875f65c9691-typha-certs\") pod \"calico-typha-5b9d547c5f-jlh8n\" (UID: \"b406cd83-14bb-451e-ad7b-1875f65c9691\") " pod="calico-system/calico-typha-5b9d547c5f-jlh8n" Sep 9 00:43:25.276455 kubelet[2118]: I0909 00:43:25.276437 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxj6p\" (UniqueName: \"kubernetes.io/projected/b406cd83-14bb-451e-ad7b-1875f65c9691-kube-api-access-jxj6p\") pod \"calico-typha-5b9d547c5f-jlh8n\" (UID: \"b406cd83-14bb-451e-ad7b-1875f65c9691\") " pod="calico-system/calico-typha-5b9d547c5f-jlh8n" Sep 9 00:43:25.276587 kubelet[2118]: I0909 00:43:25.276573 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b406cd83-14bb-451e-ad7b-1875f65c9691-tigera-ca-bundle\") pod \"calico-typha-5b9d547c5f-jlh8n\" (UID: 
\"b406cd83-14bb-451e-ad7b-1875f65c9691\") " pod="calico-system/calico-typha-5b9d547c5f-jlh8n" Sep 9 00:43:25.478279 kubelet[2118]: I0909 00:43:25.478024 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-cni-log-dir\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478279 kubelet[2118]: I0909 00:43:25.478122 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-cni-net-dir\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478279 kubelet[2118]: I0909 00:43:25.478177 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05f6e245-489d-4791-b163-b99cadbc6c4b-node-certs\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478279 kubelet[2118]: I0909 00:43:25.478244 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-var-run-calico\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478279 kubelet[2118]: I0909 00:43:25.478267 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-policysync\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 
00:43:25.478496 kubelet[2118]: I0909 00:43:25.478316 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f6e245-489d-4791-b163-b99cadbc6c4b-tigera-ca-bundle\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478496 kubelet[2118]: I0909 00:43:25.478335 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-xtables-lock\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478496 kubelet[2118]: I0909 00:43:25.478402 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-cni-bin-dir\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478496 kubelet[2118]: I0909 00:43:25.478422 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-flexvol-driver-host\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478496 kubelet[2118]: I0909 00:43:25.478465 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-lib-modules\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478615 kubelet[2118]: I0909 00:43:25.478486 2118 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpgt\" (UniqueName: \"kubernetes.io/projected/05f6e245-489d-4791-b163-b99cadbc6c4b-kube-api-access-zlpgt\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.478615 kubelet[2118]: I0909 00:43:25.478541 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05f6e245-489d-4791-b163-b99cadbc6c4b-var-lib-calico\") pod \"calico-node-g4zw6\" (UID: \"05f6e245-489d-4791-b163-b99cadbc6c4b\") " pod="calico-system/calico-node-g4zw6" Sep 9 00:43:25.502410 kubelet[2118]: E0909 00:43:25.502367 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:25.502883 env[1317]: time="2025-09-09T00:43:25.502846887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9d547c5f-jlh8n,Uid:b406cd83-14bb-451e-ad7b-1875f65c9691,Namespace:calico-system,Attempt:0,}" Sep 9 00:43:25.518229 env[1317]: time="2025-09-09T00:43:25.518161191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:25.518326 env[1317]: time="2025-09-09T00:43:25.518246750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:25.518326 env[1317]: time="2025-09-09T00:43:25.518278030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:25.518459 env[1317]: time="2025-09-09T00:43:25.518431590Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3ed4159c2facf2d9b508451d5bae01d1b7b0b7d8aae4ccfc260b11cb13064bd pid=2537 runtime=io.containerd.runc.v2 Sep 9 00:43:25.566026 env[1317]: time="2025-09-09T00:43:25.565988535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9d547c5f-jlh8n,Uid:b406cd83-14bb-451e-ad7b-1875f65c9691,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3ed4159c2facf2d9b508451d5bae01d1b7b0b7d8aae4ccfc260b11cb13064bd\"" Sep 9 00:43:25.567000 kubelet[2118]: E0909 00:43:25.566819 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:25.567755 env[1317]: time="2025-09-09T00:43:25.567728608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:43:25.584083 kubelet[2118]: E0909 00:43:25.584055 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.584186 kubelet[2118]: W0909 00:43:25.584171 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.584291 kubelet[2118]: E0909 00:43:25.584277 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.590750 kubelet[2118]: E0909 00:43:25.590684 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.590865 kubelet[2118]: W0909 00:43:25.590848 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.590943 kubelet[2118]: E0909 00:43:25.590930 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.658849 kubelet[2118]: E0909 00:43:25.658765 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b44f5" podUID="5fcbd175-b1d0-445a-87d8-30edc58c5294" Sep 9 00:43:25.665925 kubelet[2118]: E0909 00:43:25.665882 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.665925 kubelet[2118]: W0909 00:43:25.665905 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.665925 kubelet[2118]: E0909 00:43:25.665923 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.668359 kubelet[2118]: E0909 00:43:25.668337 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.668359 kubelet[2118]: W0909 00:43:25.668354 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.668468 kubelet[2118]: E0909 00:43:25.668368 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.671046 kubelet[2118]: E0909 00:43:25.670993 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.671046 kubelet[2118]: W0909 00:43:25.671009 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.671046 kubelet[2118]: E0909 00:43:25.671024 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.671258 kubelet[2118]: E0909 00:43:25.671240 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.671258 kubelet[2118]: W0909 00:43:25.671249 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.671258 kubelet[2118]: E0909 00:43:25.671258 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.671464 kubelet[2118]: E0909 00:43:25.671448 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.671464 kubelet[2118]: W0909 00:43:25.671459 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.671545 kubelet[2118]: E0909 00:43:25.671469 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.671635 kubelet[2118]: E0909 00:43:25.671611 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.671635 kubelet[2118]: W0909 00:43:25.671621 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.671635 kubelet[2118]: E0909 00:43:25.671628 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.672425 kubelet[2118]: E0909 00:43:25.672408 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.672425 kubelet[2118]: W0909 00:43:25.672422 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.672522 kubelet[2118]: E0909 00:43:25.672433 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.672611 kubelet[2118]: E0909 00:43:25.672599 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.672611 kubelet[2118]: W0909 00:43:25.672609 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.672676 kubelet[2118]: E0909 00:43:25.672617 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.672764 kubelet[2118]: E0909 00:43:25.672752 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.672764 kubelet[2118]: W0909 00:43:25.672762 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.672842 kubelet[2118]: E0909 00:43:25.672769 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.672897 kubelet[2118]: E0909 00:43:25.672886 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.672897 kubelet[2118]: W0909 00:43:25.672895 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.672951 kubelet[2118]: E0909 00:43:25.672902 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.673034 kubelet[2118]: E0909 00:43:25.673024 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.673034 kubelet[2118]: W0909 00:43:25.673033 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.673106 kubelet[2118]: E0909 00:43:25.673040 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.673165 kubelet[2118]: E0909 00:43:25.673156 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.673207 kubelet[2118]: W0909 00:43:25.673165 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.673207 kubelet[2118]: E0909 00:43:25.673172 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.673318 kubelet[2118]: E0909 00:43:25.673309 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.673318 kubelet[2118]: W0909 00:43:25.673318 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.673386 kubelet[2118]: E0909 00:43:25.673327 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.673449 kubelet[2118]: E0909 00:43:25.673440 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.673449 kubelet[2118]: W0909 00:43:25.673448 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.673506 kubelet[2118]: E0909 00:43:25.673455 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.673573 kubelet[2118]: E0909 00:43:25.673564 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.673573 kubelet[2118]: W0909 00:43:25.673573 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.673638 kubelet[2118]: E0909 00:43:25.673579 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.673705 kubelet[2118]: E0909 00:43:25.673696 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.673733 kubelet[2118]: W0909 00:43:25.673705 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.673733 kubelet[2118]: E0909 00:43:25.673712 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.673847 kubelet[2118]: E0909 00:43:25.673838 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.673847 kubelet[2118]: W0909 00:43:25.673847 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.673904 kubelet[2118]: E0909 00:43:25.673854 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.674009 kubelet[2118]: E0909 00:43:25.673968 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.674009 kubelet[2118]: W0909 00:43:25.673985 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.674009 kubelet[2118]: E0909 00:43:25.673993 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.674124 kubelet[2118]: E0909 00:43:25.674114 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.674124 kubelet[2118]: W0909 00:43:25.674120 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.674182 kubelet[2118]: E0909 00:43:25.674126 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.674257 kubelet[2118]: E0909 00:43:25.674248 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.674257 kubelet[2118]: W0909 00:43:25.674257 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.674309 kubelet[2118]: E0909 00:43:25.674264 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.680643 kubelet[2118]: E0909 00:43:25.680626 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.680736 kubelet[2118]: W0909 00:43:25.680722 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.680802 kubelet[2118]: E0909 00:43:25.680789 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.680870 kubelet[2118]: I0909 00:43:25.680859 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5fcbd175-b1d0-445a-87d8-30edc58c5294-socket-dir\") pod \"csi-node-driver-b44f5\" (UID: \"5fcbd175-b1d0-445a-87d8-30edc58c5294\") " pod="calico-system/csi-node-driver-b44f5" Sep 9 00:43:25.681137 kubelet[2118]: E0909 00:43:25.681122 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.681228 kubelet[2118]: W0909 00:43:25.681213 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.681303 kubelet[2118]: E0909 00:43:25.681292 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.681365 kubelet[2118]: I0909 00:43:25.681354 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fcbd175-b1d0-445a-87d8-30edc58c5294-registration-dir\") pod \"csi-node-driver-b44f5\" (UID: \"5fcbd175-b1d0-445a-87d8-30edc58c5294\") " pod="calico-system/csi-node-driver-b44f5" Sep 9 00:43:25.681622 kubelet[2118]: E0909 00:43:25.681607 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.681707 kubelet[2118]: W0909 00:43:25.681695 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.681808 kubelet[2118]: E0909 00:43:25.681792 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.681880 kubelet[2118]: I0909 00:43:25.681866 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5fcbd175-b1d0-445a-87d8-30edc58c5294-varrun\") pod \"csi-node-driver-b44f5\" (UID: \"5fcbd175-b1d0-445a-87d8-30edc58c5294\") " pod="calico-system/csi-node-driver-b44f5" Sep 9 00:43:25.682291 kubelet[2118]: E0909 00:43:25.682256 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.682384 kubelet[2118]: W0909 00:43:25.682370 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.682574 kubelet[2118]: E0909 00:43:25.682550 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.682627 kubelet[2118]: I0909 00:43:25.682584 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fcbd175-b1d0-445a-87d8-30edc58c5294-kubelet-dir\") pod \"csi-node-driver-b44f5\" (UID: \"5fcbd175-b1d0-445a-87d8-30edc58c5294\") " pod="calico-system/csi-node-driver-b44f5" Sep 9 00:43:25.682842 kubelet[2118]: E0909 00:43:25.682711 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.682925 kubelet[2118]: W0909 00:43:25.682909 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.683050 kubelet[2118]: E0909 00:43:25.683030 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.683254 kubelet[2118]: E0909 00:43:25.683233 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.683334 kubelet[2118]: W0909 00:43:25.683314 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.683443 kubelet[2118]: E0909 00:43:25.683427 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.683655 kubelet[2118]: E0909 00:43:25.683640 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.683789 kubelet[2118]: W0909 00:43:25.683775 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.683918 kubelet[2118]: E0909 00:43:25.683894 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.684152 kubelet[2118]: E0909 00:43:25.684137 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.684245 kubelet[2118]: W0909 00:43:25.684231 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.684419 kubelet[2118]: E0909 00:43:25.684405 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.684533 kubelet[2118]: I0909 00:43:25.684519 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2jf\" (UniqueName: \"kubernetes.io/projected/5fcbd175-b1d0-445a-87d8-30edc58c5294-kube-api-access-4j2jf\") pod \"csi-node-driver-b44f5\" (UID: \"5fcbd175-b1d0-445a-87d8-30edc58c5294\") " pod="calico-system/csi-node-driver-b44f5" Sep 9 00:43:25.684968 kubelet[2118]: E0909 00:43:25.684952 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.685528 kubelet[2118]: W0909 00:43:25.685508 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.685719 kubelet[2118]: E0909 00:43:25.685703 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.685835 kubelet[2118]: E0909 00:43:25.685824 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.685898 kubelet[2118]: W0909 00:43:25.685886 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.685956 kubelet[2118]: E0909 00:43:25.685944 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.686264 kubelet[2118]: E0909 00:43:25.686251 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.686342 kubelet[2118]: W0909 00:43:25.686329 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.686399 kubelet[2118]: E0909 00:43:25.686387 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.686620 kubelet[2118]: E0909 00:43:25.686601 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.686694 kubelet[2118]: W0909 00:43:25.686681 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.686754 kubelet[2118]: E0909 00:43:25.686742 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.686998 kubelet[2118]: E0909 00:43:25.686973 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.687126 kubelet[2118]: W0909 00:43:25.687071 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.687211 kubelet[2118]: E0909 00:43:25.687178 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.687966 kubelet[2118]: E0909 00:43:25.687951 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.688202 kubelet[2118]: W0909 00:43:25.688177 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.688277 kubelet[2118]: E0909 00:43:25.688264 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.688841 kubelet[2118]: E0909 00:43:25.688824 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.688956 kubelet[2118]: W0909 00:43:25.688943 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.689036 kubelet[2118]: E0909 00:43:25.689024 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.727852 env[1317]: time="2025-09-09T00:43:25.727491221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4zw6,Uid:05f6e245-489d-4791-b163-b99cadbc6c4b,Namespace:calico-system,Attempt:0,}" Sep 9 00:43:25.746021 env[1317]: time="2025-09-09T00:43:25.743808441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:25.746021 env[1317]: time="2025-09-09T00:43:25.743855960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:25.746021 env[1317]: time="2025-09-09T00:43:25.743877360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:25.746021 env[1317]: time="2025-09-09T00:43:25.744048720Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca pid=2628 runtime=io.containerd.runc.v2 Sep 9 00:43:25.779643 env[1317]: time="2025-09-09T00:43:25.779150831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4zw6,Uid:05f6e245-489d-4791-b163-b99cadbc6c4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca\"" Sep 9 00:43:25.790376 kubelet[2118]: E0909 00:43:25.790352 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.790376 kubelet[2118]: W0909 00:43:25.790372 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.790527 kubelet[2118]: E0909 00:43:25.790391 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.790561 kubelet[2118]: E0909 00:43:25.790547 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.790561 kubelet[2118]: W0909 00:43:25.790555 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.790605 kubelet[2118]: E0909 00:43:25.790568 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.790771 kubelet[2118]: E0909 00:43:25.790755 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.790805 kubelet[2118]: W0909 00:43:25.790772 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.790805 kubelet[2118]: E0909 00:43:25.790790 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.790983 kubelet[2118]: E0909 00:43:25.790966 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.791023 kubelet[2118]: W0909 00:43:25.791011 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.791054 kubelet[2118]: E0909 00:43:25.791029 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.791242 kubelet[2118]: E0909 00:43:25.791228 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.791242 kubelet[2118]: W0909 00:43:25.791241 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.791303 kubelet[2118]: E0909 00:43:25.791256 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.791463 kubelet[2118]: E0909 00:43:25.791443 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.791463 kubelet[2118]: W0909 00:43:25.791461 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.791528 kubelet[2118]: E0909 00:43:25.791476 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.791630 kubelet[2118]: E0909 00:43:25.791618 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.791668 kubelet[2118]: W0909 00:43:25.791633 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.791668 kubelet[2118]: E0909 00:43:25.791646 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.791801 kubelet[2118]: E0909 00:43:25.791773 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.791801 kubelet[2118]: W0909 00:43:25.791790 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.791874 kubelet[2118]: E0909 00:43:25.791855 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.791950 kubelet[2118]: E0909 00:43:25.791939 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.791950 kubelet[2118]: W0909 00:43:25.791950 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.792102 kubelet[2118]: E0909 00:43:25.792038 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.792264 kubelet[2118]: E0909 00:43:25.792173 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.792264 kubelet[2118]: W0909 00:43:25.792215 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.792264 kubelet[2118]: E0909 00:43:25.792230 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.792437 kubelet[2118]: E0909 00:43:25.792421 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.792437 kubelet[2118]: W0909 00:43:25.792433 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.792485 kubelet[2118]: E0909 00:43:25.792445 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.792617 kubelet[2118]: E0909 00:43:25.792605 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.792617 kubelet[2118]: W0909 00:43:25.792616 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.792665 kubelet[2118]: E0909 00:43:25.792627 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.792915 kubelet[2118]: E0909 00:43:25.792901 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.792950 kubelet[2118]: W0909 00:43:25.792916 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.792950 kubelet[2118]: E0909 00:43:25.792931 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.793308 kubelet[2118]: E0909 00:43:25.793270 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.793308 kubelet[2118]: W0909 00:43:25.793288 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.793376 kubelet[2118]: E0909 00:43:25.793344 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.793794 kubelet[2118]: E0909 00:43:25.793778 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.793794 kubelet[2118]: W0909 00:43:25.793793 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.793941 kubelet[2118]: E0909 00:43:25.793852 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.794007 kubelet[2118]: E0909 00:43:25.793958 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.794007 kubelet[2118]: W0909 00:43:25.793966 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.794066 kubelet[2118]: E0909 00:43:25.794015 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.794150 kubelet[2118]: E0909 00:43:25.794137 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.794150 kubelet[2118]: W0909 00:43:25.794150 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.794268 kubelet[2118]: E0909 00:43:25.794239 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.794318 kubelet[2118]: E0909 00:43:25.794294 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.794318 kubelet[2118]: W0909 00:43:25.794301 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.794318 kubelet[2118]: E0909 00:43:25.794316 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.794530 kubelet[2118]: E0909 00:43:25.794440 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.794530 kubelet[2118]: W0909 00:43:25.794450 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.794530 kubelet[2118]: E0909 00:43:25.794458 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.794774 kubelet[2118]: E0909 00:43:25.794672 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.794774 kubelet[2118]: W0909 00:43:25.794686 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.794774 kubelet[2118]: E0909 00:43:25.794703 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.795058 kubelet[2118]: E0909 00:43:25.794913 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.795058 kubelet[2118]: W0909 00:43:25.794925 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.795058 kubelet[2118]: E0909 00:43:25.794941 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.795447 kubelet[2118]: E0909 00:43:25.795215 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.795447 kubelet[2118]: W0909 00:43:25.795228 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.795447 kubelet[2118]: E0909 00:43:25.795245 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.795576 kubelet[2118]: E0909 00:43:25.795564 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.795623 kubelet[2118]: W0909 00:43:25.795577 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.795623 kubelet[2118]: E0909 00:43:25.795594 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.795799 kubelet[2118]: E0909 00:43:25.795787 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.795799 kubelet[2118]: W0909 00:43:25.795798 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.795874 kubelet[2118]: E0909 00:43:25.795855 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:25.795964 kubelet[2118]: E0909 00:43:25.795953 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.795964 kubelet[2118]: W0909 00:43:25.795963 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.796064 kubelet[2118]: E0909 00:43:25.795973 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:25.805485 kubelet[2118]: E0909 00:43:25.805424 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:25.805485 kubelet[2118]: W0909 00:43:25.805441 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:25.805485 kubelet[2118]: E0909 00:43:25.805455 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:26.242000 audit[2688]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:26.252221 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 9 00:43:26.252293 kernel: audit: type=1325 audit(1757378606.242:286): table=filter:97 family=2 entries=20 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:26.252332 kernel: audit: type=1300 audit(1757378606.242:286): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdb1634f0 a2=0 a3=1 items=0 ppid=2268 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:26.242000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdb1634f0 a2=0 a3=1 items=0 ppid=2268 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:26.259169 kernel: audit: type=1327 audit(1757378606.242:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:26.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:26.262000 audit[2688]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:26.262000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb1634f0 a2=0 a3=1 items=0 ppid=2268 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:26.270433 kernel: audit: type=1325 audit(1757378606.262:287): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:26.270486 kernel: audit: type=1300 audit(1757378606.262:287): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb1634f0 a2=0 a3=1 items=0 ppid=2268 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:26.272011 kernel: audit: type=1327 audit(1757378606.262:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:26.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:26.391973 systemd[1]: run-containerd-runc-k8s.io-a3ed4159c2facf2d9b508451d5bae01d1b7b0b7d8aae4ccfc260b11cb13064bd-runc.2vEpeV.mount: Deactivated successfully. Sep 9 00:43:26.474915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396120888.mount: Deactivated successfully. 
Sep 9 00:43:27.138474 env[1317]: time="2025-09-09T00:43:27.138431883Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:27.140660 env[1317]: time="2025-09-09T00:43:27.140624435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:27.142133 env[1317]: time="2025-09-09T00:43:27.142085310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:27.144026 env[1317]: time="2025-09-09T00:43:27.143953784Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:27.144591 env[1317]: time="2025-09-09T00:43:27.144566462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 00:43:27.148095 env[1317]: time="2025-09-09T00:43:27.147310293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:43:27.158899 env[1317]: time="2025-09-09T00:43:27.158456495Z" level=info msg="CreateContainer within sandbox \"a3ed4159c2facf2d9b508451d5bae01d1b7b0b7d8aae4ccfc260b11cb13064bd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:43:27.169036 env[1317]: time="2025-09-09T00:43:27.168985980Z" level=info msg="CreateContainer within sandbox \"a3ed4159c2facf2d9b508451d5bae01d1b7b0b7d8aae4ccfc260b11cb13064bd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"d02f51e88d224f7e552cc4a100d8a7cb9b4b22151e15ac04d4dc9c1cfb036880\"" Sep 9 00:43:27.170389 env[1317]: time="2025-09-09T00:43:27.169496258Z" level=info msg="StartContainer for \"d02f51e88d224f7e552cc4a100d8a7cb9b4b22151e15ac04d4dc9c1cfb036880\"" Sep 9 00:43:27.250949 env[1317]: time="2025-09-09T00:43:27.250898225Z" level=info msg="StartContainer for \"d02f51e88d224f7e552cc4a100d8a7cb9b4b22151e15ac04d4dc9c1cfb036880\" returns successfully" Sep 9 00:43:27.780542 kubelet[2118]: E0909 00:43:27.780484 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b44f5" podUID="5fcbd175-b1d0-445a-87d8-30edc58c5294" Sep 9 00:43:27.826952 kubelet[2118]: E0909 00:43:27.826923 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:27.890373 kubelet[2118]: E0909 00:43:27.890341 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.890598 kubelet[2118]: W0909 00:43:27.890524 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.890598 kubelet[2118]: E0909 00:43:27.890550 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.890885 kubelet[2118]: E0909 00:43:27.890872 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.890989 kubelet[2118]: W0909 00:43:27.890963 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.891066 kubelet[2118]: E0909 00:43:27.891049 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.891357 kubelet[2118]: E0909 00:43:27.891344 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.891445 kubelet[2118]: W0909 00:43:27.891433 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.891527 kubelet[2118]: E0909 00:43:27.891512 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.891771 kubelet[2118]: E0909 00:43:27.891758 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.891868 kubelet[2118]: W0909 00:43:27.891850 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.891940 kubelet[2118]: E0909 00:43:27.891929 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.892519 kubelet[2118]: E0909 00:43:27.892505 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.892613 kubelet[2118]: W0909 00:43:27.892601 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.892693 kubelet[2118]: E0909 00:43:27.892683 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.893394 kubelet[2118]: E0909 00:43:27.893380 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.893726 kubelet[2118]: W0909 00:43:27.893709 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.893817 kubelet[2118]: E0909 00:43:27.893805 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.894046 kubelet[2118]: E0909 00:43:27.894034 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.894125 kubelet[2118]: W0909 00:43:27.894112 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.894203 kubelet[2118]: E0909 00:43:27.894189 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.894486 kubelet[2118]: E0909 00:43:27.894474 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.894566 kubelet[2118]: W0909 00:43:27.894553 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.894657 kubelet[2118]: E0909 00:43:27.894645 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.894907 kubelet[2118]: E0909 00:43:27.894895 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.894991 kubelet[2118]: W0909 00:43:27.894968 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.895063 kubelet[2118]: E0909 00:43:27.895050 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.895295 kubelet[2118]: E0909 00:43:27.895282 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.895370 kubelet[2118]: W0909 00:43:27.895358 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.895437 kubelet[2118]: E0909 00:43:27.895425 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.895633 kubelet[2118]: E0909 00:43:27.895622 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.895710 kubelet[2118]: W0909 00:43:27.895698 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.895766 kubelet[2118]: E0909 00:43:27.895754 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.896004 kubelet[2118]: E0909 00:43:27.895992 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.896195 kubelet[2118]: W0909 00:43:27.896140 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.896315 kubelet[2118]: E0909 00:43:27.896298 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.896701 kubelet[2118]: E0909 00:43:27.896686 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.896868 kubelet[2118]: W0909 00:43:27.896850 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.896939 kubelet[2118]: E0909 00:43:27.896927 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.897227 kubelet[2118]: E0909 00:43:27.897213 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.897313 kubelet[2118]: W0909 00:43:27.897300 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.897370 kubelet[2118]: E0909 00:43:27.897360 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.897582 kubelet[2118]: E0909 00:43:27.897571 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.897657 kubelet[2118]: W0909 00:43:27.897644 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.897715 kubelet[2118]: E0909 00:43:27.897703 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.908954 kubelet[2118]: E0909 00:43:27.908937 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.908954 kubelet[2118]: W0909 00:43:27.908953 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.909083 kubelet[2118]: E0909 00:43:27.908966 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.909167 kubelet[2118]: E0909 00:43:27.909154 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.909167 kubelet[2118]: W0909 00:43:27.909166 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.909237 kubelet[2118]: E0909 00:43:27.909183 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.909356 kubelet[2118]: E0909 00:43:27.909344 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.909356 kubelet[2118]: W0909 00:43:27.909355 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.909418 kubelet[2118]: E0909 00:43:27.909367 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.909542 kubelet[2118]: E0909 00:43:27.909532 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.909602 kubelet[2118]: W0909 00:43:27.909542 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.909602 kubelet[2118]: E0909 00:43:27.909553 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.909722 kubelet[2118]: E0909 00:43:27.909695 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.909722 kubelet[2118]: W0909 00:43:27.909705 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.909722 kubelet[2118]: E0909 00:43:27.909713 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.909840 kubelet[2118]: E0909 00:43:27.909828 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.909840 kubelet[2118]: W0909 00:43:27.909838 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.909900 kubelet[2118]: E0909 00:43:27.909849 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.911298 kubelet[2118]: E0909 00:43:27.911273 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.911298 kubelet[2118]: W0909 00:43:27.911291 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.911389 kubelet[2118]: E0909 00:43:27.911306 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.911761 kubelet[2118]: E0909 00:43:27.911733 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.911761 kubelet[2118]: W0909 00:43:27.911748 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.911761 kubelet[2118]: E0909 00:43:27.911758 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.912032 kubelet[2118]: E0909 00:43:27.912020 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.912032 kubelet[2118]: W0909 00:43:27.912031 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.912107 kubelet[2118]: E0909 00:43:27.912043 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.912843 kubelet[2118]: E0909 00:43:27.912831 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.912843 kubelet[2118]: W0909 00:43:27.912842 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.912945 kubelet[2118]: E0909 00:43:27.912927 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.913116 kubelet[2118]: E0909 00:43:27.913096 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.913116 kubelet[2118]: W0909 00:43:27.913107 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.913257 kubelet[2118]: E0909 00:43:27.913122 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.913318 kubelet[2118]: E0909 00:43:27.913299 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.913318 kubelet[2118]: W0909 00:43:27.913312 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.913430 kubelet[2118]: E0909 00:43:27.913322 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.913506 kubelet[2118]: E0909 00:43:27.913490 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.913506 kubelet[2118]: W0909 00:43:27.913502 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.913580 kubelet[2118]: E0909 00:43:27.913513 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.913716 kubelet[2118]: E0909 00:43:27.913705 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.913754 kubelet[2118]: W0909 00:43:27.913717 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.913754 kubelet[2118]: E0909 00:43:27.913726 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.914015 kubelet[2118]: E0909 00:43:27.913994 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.914015 kubelet[2118]: W0909 00:43:27.914014 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.914097 kubelet[2118]: E0909 00:43:27.914026 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.914183 kubelet[2118]: E0909 00:43:27.914166 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.914219 kubelet[2118]: W0909 00:43:27.914184 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.914219 kubelet[2118]: E0909 00:43:27.914194 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:27.914386 kubelet[2118]: E0909 00:43:27.914376 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.914422 kubelet[2118]: W0909 00:43:27.914386 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.914445 kubelet[2118]: E0909 00:43:27.914436 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:43:27.914767 kubelet[2118]: E0909 00:43:27.914755 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:43:27.914767 kubelet[2118]: W0909 00:43:27.914767 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:43:27.914857 kubelet[2118]: E0909 00:43:27.914777 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:43:28.048701 env[1317]: time="2025-09-09T00:43:28.048578077Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:28.053729 env[1317]: time="2025-09-09T00:43:28.053675100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:28.055058 env[1317]: time="2025-09-09T00:43:28.055027656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:28.056514 env[1317]: time="2025-09-09T00:43:28.056476971Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:28.056901 env[1317]: time="2025-09-09T00:43:28.056865370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 00:43:28.059671 env[1317]: time="2025-09-09T00:43:28.059192803Z" level=info msg="CreateContainer within sandbox \"4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:43:28.069811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688777614.mount: Deactivated successfully. 
Sep 9 00:43:28.073816 env[1317]: time="2025-09-09T00:43:28.073759516Z" level=info msg="CreateContainer within sandbox \"4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bda875f148735e4b500da46fd72c34b0ee27b296c5eb6a10c1f544572c257501\"" Sep 9 00:43:28.074161 env[1317]: time="2025-09-09T00:43:28.074140395Z" level=info msg="StartContainer for \"bda875f148735e4b500da46fd72c34b0ee27b296c5eb6a10c1f544572c257501\"" Sep 9 00:43:28.136858 env[1317]: time="2025-09-09T00:43:28.136199236Z" level=info msg="StartContainer for \"bda875f148735e4b500da46fd72c34b0ee27b296c5eb6a10c1f544572c257501\" returns successfully" Sep 9 00:43:28.169314 env[1317]: time="2025-09-09T00:43:28.169257610Z" level=info msg="shim disconnected" id=bda875f148735e4b500da46fd72c34b0ee27b296c5eb6a10c1f544572c257501 Sep 9 00:43:28.169314 env[1317]: time="2025-09-09T00:43:28.169303289Z" level=warning msg="cleaning up after shim disconnected" id=bda875f148735e4b500da46fd72c34b0ee27b296c5eb6a10c1f544572c257501 namespace=k8s.io Sep 9 00:43:28.169314 env[1317]: time="2025-09-09T00:43:28.169312209Z" level=info msg="cleaning up dead shim" Sep 9 00:43:28.176347 env[1317]: time="2025-09-09T00:43:28.176303387Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:43:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2814 runtime=io.containerd.runc.v2\n" Sep 9 00:43:28.382403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bda875f148735e4b500da46fd72c34b0ee27b296c5eb6a10c1f544572c257501-rootfs.mount: Deactivated successfully. 
Sep 9 00:43:28.829667 kubelet[2118]: I0909 00:43:28.829633 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:43:28.830813 kubelet[2118]: E0909 00:43:28.830783 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:28.832261 env[1317]: time="2025-09-09T00:43:28.832227443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:43:28.850022 kubelet[2118]: I0909 00:43:28.849960 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b9d547c5f-jlh8n" podStartSLOduration=2.27090566 podStartE2EDuration="3.849947307s" podCreationTimestamp="2025-09-09 00:43:25 +0000 UTC" firstStartedPulling="2025-09-09 00:43:25.567410089 +0000 UTC m=+18.895396821" lastFinishedPulling="2025-09-09 00:43:27.146451736 +0000 UTC m=+20.474438468" observedRunningTime="2025-09-09 00:43:27.841763403 +0000 UTC m=+21.169750095" watchObservedRunningTime="2025-09-09 00:43:28.849947307 +0000 UTC m=+22.177934039" Sep 9 00:43:29.780646 kubelet[2118]: E0909 00:43:29.780584 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b44f5" podUID="5fcbd175-b1d0-445a-87d8-30edc58c5294" Sep 9 00:43:31.241816 env[1317]: time="2025-09-09T00:43:31.241770854Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:31.243747 env[1317]: time="2025-09-09T00:43:31.243710328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 9 00:43:31.245105 env[1317]: time="2025-09-09T00:43:31.245073724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:31.247030 env[1317]: time="2025-09-09T00:43:31.247004559Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:31.247601 env[1317]: time="2025-09-09T00:43:31.247574277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 00:43:31.252890 env[1317]: time="2025-09-09T00:43:31.252859182Z" level=info msg="CreateContainer within sandbox \"4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:43:31.267542 env[1317]: time="2025-09-09T00:43:31.267479821Z" level=info msg="CreateContainer within sandbox \"4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d3bb074e2fe5375381245fca9da771c1a5d9d627802ae97ac7b3b5b95f3b5b56\"" Sep 9 00:43:31.268229 env[1317]: time="2025-09-09T00:43:31.268055340Z" level=info msg="StartContainer for \"d3bb074e2fe5375381245fca9da771c1a5d9d627802ae97ac7b3b5b95f3b5b56\"" Sep 9 00:43:31.325050 env[1317]: time="2025-09-09T00:43:31.325007739Z" level=info msg="StartContainer for \"d3bb074e2fe5375381245fca9da771c1a5d9d627802ae97ac7b3b5b95f3b5b56\" returns successfully" Sep 9 00:43:31.780415 kubelet[2118]: E0909 00:43:31.780350 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b44f5" podUID="5fcbd175-b1d0-445a-87d8-30edc58c5294" Sep 9 00:43:32.106332 env[1317]: time="2025-09-09T00:43:32.106276869Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:43:32.122762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3bb074e2fe5375381245fca9da771c1a5d9d627802ae97ac7b3b5b95f3b5b56-rootfs.mount: Deactivated successfully. Sep 9 00:43:32.128546 env[1317]: time="2025-09-09T00:43:32.128501408Z" level=info msg="shim disconnected" id=d3bb074e2fe5375381245fca9da771c1a5d9d627802ae97ac7b3b5b95f3b5b56 Sep 9 00:43:32.128546 env[1317]: time="2025-09-09T00:43:32.128548808Z" level=warning msg="cleaning up after shim disconnected" id=d3bb074e2fe5375381245fca9da771c1a5d9d627802ae97ac7b3b5b95f3b5b56 namespace=k8s.io Sep 9 00:43:32.128748 env[1317]: time="2025-09-09T00:43:32.128558888Z" level=info msg="cleaning up dead shim" Sep 9 00:43:32.130243 kubelet[2118]: I0909 00:43:32.130217 2118 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:43:32.137212 env[1317]: time="2025-09-09T00:43:32.137166505Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:43:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2885 runtime=io.containerd.runc.v2\n" Sep 9 00:43:32.165893 kubelet[2118]: W0909 00:43:32.165850 2118 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 00:43:32.170293 kubelet[2118]: E0909 00:43:32.170131 2118 reflector.go:158] "Unhandled 
Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 00:43:32.242127 kubelet[2118]: I0909 00:43:32.242089 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/506277be-dd46-4716-b8b9-1f3976363568-tigera-ca-bundle\") pod \"calico-kube-controllers-89f6f49cb-svnf4\" (UID: \"506277be-dd46-4716-b8b9-1f3976363568\") " pod="calico-system/calico-kube-controllers-89f6f49cb-svnf4" Sep 9 00:43:32.242357 kubelet[2118]: I0909 00:43:32.242338 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsrw\" (UniqueName: \"kubernetes.io/projected/767b79c6-02cc-4919-ae65-36b5295c2cf4-kube-api-access-fjsrw\") pod \"goldmane-7988f88666-v64bm\" (UID: \"767b79c6-02cc-4919-ae65-36b5295c2cf4\") " pod="calico-system/goldmane-7988f88666-v64bm" Sep 9 00:43:32.242469 kubelet[2118]: I0909 00:43:32.242455 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-backend-key-pair\") pod \"whisker-7bbf7966b7-gp29k\" (UID: \"9a8f799c-a3fc-45db-a183-e49514e8c126\") " pod="calico-system/whisker-7bbf7966b7-gp29k" Sep 9 00:43:32.242576 kubelet[2118]: I0909 00:43:32.242561 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-ca-bundle\") pod \"whisker-7bbf7966b7-gp29k\" (UID: \"9a8f799c-a3fc-45db-a183-e49514e8c126\") " 
pod="calico-system/whisker-7bbf7966b7-gp29k" Sep 9 00:43:32.242708 kubelet[2118]: I0909 00:43:32.242691 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d1e4ec-cd2b-4e64-bfe4-0460fd03c044-config-volume\") pod \"coredns-7c65d6cfc9-wh2kv\" (UID: \"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044\") " pod="kube-system/coredns-7c65d6cfc9-wh2kv" Sep 9 00:43:32.242837 kubelet[2118]: I0909 00:43:32.242810 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lm9d\" (UniqueName: \"kubernetes.io/projected/10d1e4ec-cd2b-4e64-bfe4-0460fd03c044-kube-api-access-7lm9d\") pod \"coredns-7c65d6cfc9-wh2kv\" (UID: \"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044\") " pod="kube-system/coredns-7c65d6cfc9-wh2kv" Sep 9 00:43:32.242953 kubelet[2118]: I0909 00:43:32.242938 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b1d432c-5704-4859-93d8-421968ff17c6-config-volume\") pod \"coredns-7c65d6cfc9-2zss5\" (UID: \"2b1d432c-5704-4859-93d8-421968ff17c6\") " pod="kube-system/coredns-7c65d6cfc9-2zss5" Sep 9 00:43:32.243088 kubelet[2118]: I0909 00:43:32.243073 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5nr4\" (UniqueName: \"kubernetes.io/projected/53253ab8-84f6-4a5e-8e9a-c2b463038540-kube-api-access-s5nr4\") pod \"calico-apiserver-55cdd6bdb6-9k5zf\" (UID: \"53253ab8-84f6-4a5e-8e9a-c2b463038540\") " pod="calico-apiserver/calico-apiserver-55cdd6bdb6-9k5zf" Sep 9 00:43:32.243223 kubelet[2118]: I0909 00:43:32.243207 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6w6\" (UniqueName: \"kubernetes.io/projected/ec54d5e2-70bd-4445-9ea0-62cda1c0ae32-kube-api-access-xk6w6\") pod 
\"calico-apiserver-55cdd6bdb6-td7gz\" (UID: \"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32\") " pod="calico-apiserver/calico-apiserver-55cdd6bdb6-td7gz" Sep 9 00:43:32.243337 kubelet[2118]: I0909 00:43:32.243322 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/767b79c6-02cc-4919-ae65-36b5295c2cf4-goldmane-key-pair\") pod \"goldmane-7988f88666-v64bm\" (UID: \"767b79c6-02cc-4919-ae65-36b5295c2cf4\") " pod="calico-system/goldmane-7988f88666-v64bm" Sep 9 00:43:32.243443 kubelet[2118]: I0909 00:43:32.243430 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56skw\" (UniqueName: \"kubernetes.io/projected/2b1d432c-5704-4859-93d8-421968ff17c6-kube-api-access-56skw\") pod \"coredns-7c65d6cfc9-2zss5\" (UID: \"2b1d432c-5704-4859-93d8-421968ff17c6\") " pod="kube-system/coredns-7c65d6cfc9-2zss5" Sep 9 00:43:32.243548 kubelet[2118]: I0909 00:43:32.243532 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767b79c6-02cc-4919-ae65-36b5295c2cf4-config\") pod \"goldmane-7988f88666-v64bm\" (UID: \"767b79c6-02cc-4919-ae65-36b5295c2cf4\") " pod="calico-system/goldmane-7988f88666-v64bm" Sep 9 00:43:32.243665 kubelet[2118]: I0909 00:43:32.243650 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53253ab8-84f6-4a5e-8e9a-c2b463038540-calico-apiserver-certs\") pod \"calico-apiserver-55cdd6bdb6-9k5zf\" (UID: \"53253ab8-84f6-4a5e-8e9a-c2b463038540\") " pod="calico-apiserver/calico-apiserver-55cdd6bdb6-9k5zf" Sep 9 00:43:32.243783 kubelet[2118]: I0909 00:43:32.243768 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/ec54d5e2-70bd-4445-9ea0-62cda1c0ae32-calico-apiserver-certs\") pod \"calico-apiserver-55cdd6bdb6-td7gz\" (UID: \"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32\") " pod="calico-apiserver/calico-apiserver-55cdd6bdb6-td7gz" Sep 9 00:43:32.243893 kubelet[2118]: I0909 00:43:32.243878 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767b79c6-02cc-4919-ae65-36b5295c2cf4-goldmane-ca-bundle\") pod \"goldmane-7988f88666-v64bm\" (UID: \"767b79c6-02cc-4919-ae65-36b5295c2cf4\") " pod="calico-system/goldmane-7988f88666-v64bm" Sep 9 00:43:32.244039 kubelet[2118]: I0909 00:43:32.244021 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhv5h\" (UniqueName: \"kubernetes.io/projected/506277be-dd46-4716-b8b9-1f3976363568-kube-api-access-fhv5h\") pod \"calico-kube-controllers-89f6f49cb-svnf4\" (UID: \"506277be-dd46-4716-b8b9-1f3976363568\") " pod="calico-system/calico-kube-controllers-89f6f49cb-svnf4" Sep 9 00:43:32.244173 kubelet[2118]: I0909 00:43:32.244156 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6ts\" (UniqueName: \"kubernetes.io/projected/9a8f799c-a3fc-45db-a183-e49514e8c126-kube-api-access-2b6ts\") pod \"whisker-7bbf7966b7-gp29k\" (UID: \"9a8f799c-a3fc-45db-a183-e49514e8c126\") " pod="calico-system/whisker-7bbf7966b7-gp29k" Sep 9 00:43:32.471725 env[1317]: time="2025-09-09T00:43:32.470009445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-td7gz,Uid:ec54d5e2-70bd-4445-9ea0-62cda1c0ae32,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:43:32.474418 env[1317]: time="2025-09-09T00:43:32.474219073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-v64bm,Uid:767b79c6-02cc-4919-ae65-36b5295c2cf4,Namespace:calico-system,Attempt:0,}" Sep 9 
00:43:32.476360 env[1317]: time="2025-09-09T00:43:32.476322787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-9k5zf,Uid:53253ab8-84f6-4a5e-8e9a-c2b463038540,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:43:32.479628 env[1317]: time="2025-09-09T00:43:32.479437339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbf7966b7-gp29k,Uid:9a8f799c-a3fc-45db-a183-e49514e8c126,Namespace:calico-system,Attempt:0,}" Sep 9 00:43:32.485654 env[1317]: time="2025-09-09T00:43:32.485407323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89f6f49cb-svnf4,Uid:506277be-dd46-4716-b8b9-1f3976363568,Namespace:calico-system,Attempt:0,}" Sep 9 00:43:32.587129 env[1317]: time="2025-09-09T00:43:32.587053128Z" level=error msg="Failed to destroy network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.587892 env[1317]: time="2025-09-09T00:43:32.587728166Z" level=error msg="encountered an error cleaning up failed sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.587892 env[1317]: time="2025-09-09T00:43:32.587779486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-td7gz,Uid:ec54d5e2-70bd-4445-9ea0-62cda1c0ae32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.588093 kubelet[2118]: E0909 00:43:32.588026 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.588837 kubelet[2118]: E0909 00:43:32.588799 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-td7gz" Sep 9 00:43:32.588918 kubelet[2118]: E0909 00:43:32.588840 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-td7gz" Sep 9 00:43:32.588918 kubelet[2118]: E0909 00:43:32.588887 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55cdd6bdb6-td7gz_calico-apiserver(ec54d5e2-70bd-4445-9ea0-62cda1c0ae32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55cdd6bdb6-td7gz_calico-apiserver(ec54d5e2-70bd-4445-9ea0-62cda1c0ae32)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-td7gz" podUID="ec54d5e2-70bd-4445-9ea0-62cda1c0ae32" Sep 9 00:43:32.598972 env[1317]: time="2025-09-09T00:43:32.598916936Z" level=error msg="Failed to destroy network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.599336 env[1317]: time="2025-09-09T00:43:32.599298255Z" level=error msg="encountered an error cleaning up failed sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.599411 env[1317]: time="2025-09-09T00:43:32.599350775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-v64bm,Uid:767b79c6-02cc-4919-ae65-36b5295c2cf4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.599566 kubelet[2118]: E0909 00:43:32.599530 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.599643 kubelet[2118]: E0909 00:43:32.599581 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-v64bm" Sep 9 00:43:32.599643 kubelet[2118]: E0909 00:43:32.599601 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-v64bm" Sep 9 00:43:32.599727 kubelet[2118]: E0909 00:43:32.599645 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-v64bm_calico-system(767b79c6-02cc-4919-ae65-36b5295c2cf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-v64bm_calico-system(767b79c6-02cc-4919-ae65-36b5295c2cf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-v64bm" podUID="767b79c6-02cc-4919-ae65-36b5295c2cf4" Sep 9 00:43:32.601479 
env[1317]: time="2025-09-09T00:43:32.601309569Z" level=error msg="Failed to destroy network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.601663 env[1317]: time="2025-09-09T00:43:32.601627368Z" level=error msg="encountered an error cleaning up failed sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.601698 env[1317]: time="2025-09-09T00:43:32.601676568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-9k5zf,Uid:53253ab8-84f6-4a5e-8e9a-c2b463038540,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.601867 kubelet[2118]: E0909 00:43:32.601841 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.601907 kubelet[2118]: E0909 00:43:32.601896 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-9k5zf" Sep 9 00:43:32.602517 kubelet[2118]: E0909 00:43:32.601912 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-9k5zf" Sep 9 00:43:32.602517 kubelet[2118]: E0909 00:43:32.601950 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55cdd6bdb6-9k5zf_calico-apiserver(53253ab8-84f6-4a5e-8e9a-c2b463038540)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55cdd6bdb6-9k5zf_calico-apiserver(53253ab8-84f6-4a5e-8e9a-c2b463038540)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-9k5zf" podUID="53253ab8-84f6-4a5e-8e9a-c2b463038540" Sep 9 00:43:32.610945 env[1317]: time="2025-09-09T00:43:32.610902543Z" level=error msg="Failed to destroy network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 9 00:43:32.611373 env[1317]: time="2025-09-09T00:43:32.611338102Z" level=error msg="encountered an error cleaning up failed sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.611487 env[1317]: time="2025-09-09T00:43:32.611459542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89f6f49cb-svnf4,Uid:506277be-dd46-4716-b8b9-1f3976363568,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.611745 kubelet[2118]: E0909 00:43:32.611711 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.611802 kubelet[2118]: E0909 00:43:32.611756 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-89f6f49cb-svnf4" Sep 9 00:43:32.611802 
kubelet[2118]: E0909 00:43:32.611784 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-89f6f49cb-svnf4" Sep 9 00:43:32.611861 kubelet[2118]: E0909 00:43:32.611820 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-89f6f49cb-svnf4_calico-system(506277be-dd46-4716-b8b9-1f3976363568)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-89f6f49cb-svnf4_calico-system(506277be-dd46-4716-b8b9-1f3976363568)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-89f6f49cb-svnf4" podUID="506277be-dd46-4716-b8b9-1f3976363568" Sep 9 00:43:32.616636 env[1317]: time="2025-09-09T00:43:32.616579928Z" level=error msg="Failed to destroy network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.617030 env[1317]: time="2025-09-09T00:43:32.616994647Z" level=error msg="encountered an error cleaning up failed sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.617171 env[1317]: time="2025-09-09T00:43:32.617129767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbf7966b7-gp29k,Uid:9a8f799c-a3fc-45db-a183-e49514e8c126,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.617414 kubelet[2118]: E0909 00:43:32.617380 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.617472 kubelet[2118]: E0909 00:43:32.617426 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bbf7966b7-gp29k" Sep 9 00:43:32.617472 kubelet[2118]: E0909 00:43:32.617444 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bbf7966b7-gp29k" Sep 9 00:43:32.617531 kubelet[2118]: E0909 00:43:32.617486 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bbf7966b7-gp29k_calico-system(9a8f799c-a3fc-45db-a183-e49514e8c126)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bbf7966b7-gp29k_calico-system(9a8f799c-a3fc-45db-a183-e49514e8c126)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bbf7966b7-gp29k" podUID="9a8f799c-a3fc-45db-a183-e49514e8c126" Sep 9 00:43:32.848255 env[1317]: time="2025-09-09T00:43:32.848214581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:43:32.849871 kubelet[2118]: I0909 00:43:32.849840 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:43:32.850905 env[1317]: time="2025-09-09T00:43:32.850853014Z" level=info msg="StopPodSandbox for \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\"" Sep 9 00:43:32.853898 kubelet[2118]: I0909 00:43:32.853873 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:43:32.854644 env[1317]: time="2025-09-09T00:43:32.854613924Z" level=info msg="StopPodSandbox for \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\"" Sep 9 00:43:32.858628 kubelet[2118]: I0909 00:43:32.858602 2118 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:43:32.860186 env[1317]: time="2025-09-09T00:43:32.859381591Z" level=info msg="StopPodSandbox for \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\"" Sep 9 00:43:32.863490 kubelet[2118]: I0909 00:43:32.863468 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:43:32.864036 env[1317]: time="2025-09-09T00:43:32.863967379Z" level=info msg="StopPodSandbox for \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\"" Sep 9 00:43:32.865252 kubelet[2118]: I0909 00:43:32.865174 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:43:32.865689 env[1317]: time="2025-09-09T00:43:32.865649014Z" level=info msg="StopPodSandbox for \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\"" Sep 9 00:43:32.890144 env[1317]: time="2025-09-09T00:43:32.890073588Z" level=error msg="StopPodSandbox for \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\" failed" error="failed to destroy network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.890366 kubelet[2118]: E0909 00:43:32.890323 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:43:32.890432 kubelet[2118]: E0909 00:43:32.890390 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a"} Sep 9 00:43:32.890470 kubelet[2118]: E0909 00:43:32.890451 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:32.890534 kubelet[2118]: E0909 00:43:32.890475 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-td7gz" podUID="ec54d5e2-70bd-4445-9ea0-62cda1c0ae32" Sep 9 00:43:32.895907 env[1317]: time="2025-09-09T00:43:32.895852932Z" level=error msg="StopPodSandbox for \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\" failed" error="failed to destroy network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 9 00:43:32.896047 env[1317]: time="2025-09-09T00:43:32.896018452Z" level=error msg="StopPodSandbox for \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\" failed" error="failed to destroy network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.896116 kubelet[2118]: E0909 00:43:32.896082 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:43:32.896187 kubelet[2118]: E0909 00:43:32.896127 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8"} Sep 9 00:43:32.896187 kubelet[2118]: E0909 00:43:32.896153 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:43:32.896248 kubelet[2118]: E0909 00:43:32.896189 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044"} 
Sep 9 00:43:32.896248 kubelet[2118]: E0909 00:43:32.896218 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a8f799c-a3fc-45db-a183-e49514e8c126\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:32.896248 kubelet[2118]: E0909 00:43:32.896238 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a8f799c-a3fc-45db-a183-e49514e8c126\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bbf7966b7-gp29k" podUID="9a8f799c-a3fc-45db-a183-e49514e8c126" Sep 9 00:43:32.896364 kubelet[2118]: E0909 00:43:32.896165 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"506277be-dd46-4716-b8b9-1f3976363568\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:32.896364 kubelet[2118]: E0909 00:43:32.896282 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"506277be-dd46-4716-b8b9-1f3976363568\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-89f6f49cb-svnf4" podUID="506277be-dd46-4716-b8b9-1f3976363568" Sep 9 00:43:32.913098 env[1317]: time="2025-09-09T00:43:32.913045286Z" level=error msg="StopPodSandbox for \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\" failed" error="failed to destroy network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.913272 kubelet[2118]: E0909 00:43:32.913234 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:43:32.913325 kubelet[2118]: E0909 00:43:32.913282 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2"} Sep 9 00:43:32.913325 kubelet[2118]: E0909 00:43:32.913310 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"767b79c6-02cc-4919-ae65-36b5295c2cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:32.913405 kubelet[2118]: E0909 00:43:32.913331 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"767b79c6-02cc-4919-ae65-36b5295c2cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-v64bm" podUID="767b79c6-02cc-4919-ae65-36b5295c2cf4" Sep 9 00:43:32.913618 env[1317]: time="2025-09-09T00:43:32.913584844Z" level=error msg="StopPodSandbox for \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\" failed" error="failed to destroy network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:32.913732 kubelet[2118]: E0909 00:43:32.913711 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:43:32.913758 kubelet[2118]: E0909 00:43:32.913736 2118 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac"} Sep 9 00:43:32.913783 kubelet[2118]: E0909 00:43:32.913759 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53253ab8-84f6-4a5e-8e9a-c2b463038540\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:32.913827 kubelet[2118]: E0909 00:43:32.913778 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53253ab8-84f6-4a5e-8e9a-c2b463038540\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-9k5zf" podUID="53253ab8-84f6-4a5e-8e9a-c2b463038540" Sep 9 00:43:33.347702 kubelet[2118]: E0909 00:43:33.347659 2118 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 00:43:33.347839 kubelet[2118]: E0909 00:43:33.347772 2118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b1d432c-5704-4859-93d8-421968ff17c6-config-volume podName:2b1d432c-5704-4859-93d8-421968ff17c6 nodeName:}" failed. No retries permitted until 2025-09-09 00:43:33.847752747 +0000 UTC m=+27.175739479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2b1d432c-5704-4859-93d8-421968ff17c6-config-volume") pod "coredns-7c65d6cfc9-2zss5" (UID: "2b1d432c-5704-4859-93d8-421968ff17c6") : failed to sync configmap cache: timed out waiting for the condition Sep 9 00:43:33.347950 kubelet[2118]: E0909 00:43:33.347930 2118 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 00:43:33.348081 kubelet[2118]: E0909 00:43:33.348068 2118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10d1e4ec-cd2b-4e64-bfe4-0460fd03c044-config-volume podName:10d1e4ec-cd2b-4e64-bfe4-0460fd03c044 nodeName:}" failed. No retries permitted until 2025-09-09 00:43:33.848052626 +0000 UTC m=+27.176039358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/10d1e4ec-cd2b-4e64-bfe4-0460fd03c044-config-volume") pod "coredns-7c65d6cfc9-wh2kv" (UID: "10d1e4ec-cd2b-4e64-bfe4-0460fd03c044") : failed to sync configmap cache: timed out waiting for the condition Sep 9 00:43:33.783065 env[1317]: time="2025-09-09T00:43:33.782947456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b44f5,Uid:5fcbd175-b1d0-445a-87d8-30edc58c5294,Namespace:calico-system,Attempt:0,}" Sep 9 00:43:33.953918 env[1317]: time="2025-09-09T00:43:33.953861371Z" level=error msg="Failed to destroy network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:33.958742 env[1317]: time="2025-09-09T00:43:33.954288050Z" level=error msg="encountered an error cleaning up failed sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:33.958742 env[1317]: time="2025-09-09T00:43:33.954332290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b44f5,Uid:5fcbd175-b1d0-445a-87d8-30edc58c5294,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:33.956118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5-shm.mount: Deactivated successfully. Sep 9 00:43:33.959108 kubelet[2118]: E0909 00:43:33.954595 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:33.959108 kubelet[2118]: E0909 00:43:33.954654 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b44f5" Sep 9 00:43:33.959108 kubelet[2118]: E0909 00:43:33.954675 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b44f5" Sep 9 00:43:33.959392 kubelet[2118]: E0909 00:43:33.954714 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b44f5_calico-system(5fcbd175-b1d0-445a-87d8-30edc58c5294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b44f5_calico-system(5fcbd175-b1d0-445a-87d8-30edc58c5294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b44f5" podUID="5fcbd175-b1d0-445a-87d8-30edc58c5294" Sep 9 00:43:33.964398 kubelet[2118]: E0909 00:43:33.964147 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:33.964762 env[1317]: time="2025-09-09T00:43:33.964706663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wh2kv,Uid:10d1e4ec-cd2b-4e64-bfe4-0460fd03c044,Namespace:kube-system,Attempt:0,}" Sep 9 00:43:33.974705 kubelet[2118]: E0909 00:43:33.974665 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:33.975596 env[1317]: time="2025-09-09T00:43:33.975536475Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zss5,Uid:2b1d432c-5704-4859-93d8-421968ff17c6,Namespace:kube-system,Attempt:0,}" Sep 9 00:43:34.043291 env[1317]: time="2025-09-09T00:43:34.043174664Z" level=error msg="Failed to destroy network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.044489 env[1317]: time="2025-09-09T00:43:34.044446860Z" level=error msg="encountered an error cleaning up failed sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.044570 env[1317]: time="2025-09-09T00:43:34.044528260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wh2kv,Uid:10d1e4ec-cd2b-4e64-bfe4-0460fd03c044,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.044854 kubelet[2118]: E0909 00:43:34.044796 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.044924 kubelet[2118]: E0909 00:43:34.044876 2118 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wh2kv" Sep 9 00:43:34.044958 kubelet[2118]: E0909 00:43:34.044921 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wh2kv" Sep 9 00:43:34.045065 kubelet[2118]: E0909 00:43:34.045019 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wh2kv_kube-system(10d1e4ec-cd2b-4e64-bfe4-0460fd03c044)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wh2kv_kube-system(10d1e4ec-cd2b-4e64-bfe4-0460fd03c044)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wh2kv" podUID="10d1e4ec-cd2b-4e64-bfe4-0460fd03c044" Sep 9 00:43:34.100892 env[1317]: time="2025-09-09T00:43:34.100840519Z" level=error msg="Failed to destroy network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.101360 env[1317]: time="2025-09-09T00:43:34.101327638Z" level=error msg="encountered an error cleaning up failed sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.101477 env[1317]: time="2025-09-09T00:43:34.101450958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zss5,Uid:2b1d432c-5704-4859-93d8-421968ff17c6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.101782 kubelet[2118]: E0909 00:43:34.101738 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.101862 kubelet[2118]: E0909 00:43:34.101809 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2zss5" 
Sep 9 00:43:34.101862 kubelet[2118]: E0909 00:43:34.101828 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2zss5" Sep 9 00:43:34.101922 kubelet[2118]: E0909 00:43:34.101880 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2zss5_kube-system(2b1d432c-5704-4859-93d8-421968ff17c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2zss5_kube-system(2b1d432c-5704-4859-93d8-421968ff17c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2zss5" podUID="2b1d432c-5704-4859-93d8-421968ff17c6" Sep 9 00:43:34.354244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775-shm.mount: Deactivated successfully. 
Sep 9 00:43:34.873592 kubelet[2118]: I0909 00:43:34.873552 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:43:34.876078 env[1317]: time="2025-09-09T00:43:34.874396866Z" level=info msg="StopPodSandbox for \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\"" Sep 9 00:43:34.877648 kubelet[2118]: I0909 00:43:34.877112 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:43:34.877755 env[1317]: time="2025-09-09T00:43:34.877536658Z" level=info msg="StopPodSandbox for \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\"" Sep 9 00:43:34.886039 kubelet[2118]: I0909 00:43:34.884234 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:43:34.890010 env[1317]: time="2025-09-09T00:43:34.888544951Z" level=info msg="StopPodSandbox for \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\"" Sep 9 00:43:34.938803 env[1317]: time="2025-09-09T00:43:34.938742705Z" level=error msg="StopPodSandbox for \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\" failed" error="failed to destroy network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.940108 kubelet[2118]: E0909 00:43:34.939880 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:43:34.940108 kubelet[2118]: E0909 00:43:34.939936 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e"} Sep 9 00:43:34.940108 kubelet[2118]: E0909 00:43:34.939970 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b1d432c-5704-4859-93d8-421968ff17c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:34.940108 kubelet[2118]: E0909 00:43:34.940043 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b1d432c-5704-4859-93d8-421968ff17c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2zss5" podUID="2b1d432c-5704-4859-93d8-421968ff17c6" Sep 9 00:43:34.948843 env[1317]: time="2025-09-09T00:43:34.948746960Z" level=error msg="StopPodSandbox for \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\" failed" error="failed to destroy network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.949197 kubelet[2118]: E0909 00:43:34.949082 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:43:34.949197 kubelet[2118]: E0909 00:43:34.949164 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5"} Sep 9 00:43:34.949473 kubelet[2118]: E0909 00:43:34.949296 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fcbd175-b1d0-445a-87d8-30edc58c5294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:34.949576 kubelet[2118]: E0909 00:43:34.949329 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fcbd175-b1d0-445a-87d8-30edc58c5294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-b44f5" podUID="5fcbd175-b1d0-445a-87d8-30edc58c5294" Sep 9 00:43:34.956315 env[1317]: time="2025-09-09T00:43:34.956268901Z" level=error msg="StopPodSandbox for \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\" failed" error="failed to destroy network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:43:34.956519 kubelet[2118]: E0909 00:43:34.956489 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:43:34.956569 kubelet[2118]: E0909 00:43:34.956530 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775"} Sep 9 00:43:34.956603 kubelet[2118]: E0909 00:43:34.956571 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:43:34.956654 kubelet[2118]: E0909 00:43:34.956591 2118 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wh2kv" podUID="10d1e4ec-cd2b-4e64-bfe4-0460fd03c044" Sep 9 00:43:37.075563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421176364.mount: Deactivated successfully. Sep 9 00:43:37.373294 env[1317]: time="2025-09-09T00:43:37.373073634Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:37.374556 env[1317]: time="2025-09-09T00:43:37.374531911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:37.375980 env[1317]: time="2025-09-09T00:43:37.375946548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:37.377728 env[1317]: time="2025-09-09T00:43:37.377703584Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:37.378259 env[1317]: time="2025-09-09T00:43:37.378232223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 00:43:37.396421 env[1317]: 
time="2025-09-09T00:43:37.396385342Z" level=info msg="CreateContainer within sandbox \"4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:43:37.413511 env[1317]: time="2025-09-09T00:43:37.413471664Z" level=info msg="CreateContainer within sandbox \"4ae886f1bc6b5584c12074a266976f1e0fc786fdd64eea56b8bb169b7e1febca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2d0d2dfacbee8d568e807005ed149ffb92e30a97113c760c084c68f477849432\"" Sep 9 00:43:37.413947 env[1317]: time="2025-09-09T00:43:37.413900783Z" level=info msg="StartContainer for \"2d0d2dfacbee8d568e807005ed149ffb92e30a97113c760c084c68f477849432\"" Sep 9 00:43:37.486174 env[1317]: time="2025-09-09T00:43:37.486126982Z" level=info msg="StartContainer for \"2d0d2dfacbee8d568e807005ed149ffb92e30a97113c760c084c68f477849432\" returns successfully" Sep 9 00:43:37.598566 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:43:37.598765 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 00:43:37.689659 env[1317]: time="2025-09-09T00:43:37.689548647Z" level=info msg="StopPodSandbox for \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\"" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.787 [INFO][3392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.788 [INFO][3392] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" iface="eth0" netns="/var/run/netns/cni-b755c226-99f3-b09c-68e8-5c7d8eeda0ac" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.789 [INFO][3392] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" iface="eth0" netns="/var/run/netns/cni-b755c226-99f3-b09c-68e8-5c7d8eeda0ac" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.790 [INFO][3392] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" iface="eth0" netns="/var/run/netns/cni-b755c226-99f3-b09c-68e8-5c7d8eeda0ac" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.790 [INFO][3392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.790 [INFO][3392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.849 [INFO][3403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.849 [INFO][3403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.849 [INFO][3403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.858 [WARNING][3403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.858 [INFO][3403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.860 [INFO][3403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:37.863575 env[1317]: 2025-09-09 00:43:37.862 [INFO][3392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:43:37.864201 env[1317]: time="2025-09-09T00:43:37.864169656Z" level=info msg="TearDown network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\" successfully" Sep 9 00:43:37.864286 env[1317]: time="2025-09-09T00:43:37.864270536Z" level=info msg="StopPodSandbox for \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\" returns successfully" Sep 9 00:43:37.893598 kubelet[2118]: I0909 00:43:37.893567 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b6ts\" (UniqueName: \"kubernetes.io/projected/9a8f799c-a3fc-45db-a183-e49514e8c126-kube-api-access-2b6ts\") pod \"9a8f799c-a3fc-45db-a183-e49514e8c126\" (UID: \"9a8f799c-a3fc-45db-a183-e49514e8c126\") " Sep 9 00:43:37.893598 kubelet[2118]: I0909 00:43:37.893609 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-backend-key-pair\") pod \"9a8f799c-a3fc-45db-a183-e49514e8c126\" 
(UID: \"9a8f799c-a3fc-45db-a183-e49514e8c126\") " Sep 9 00:43:37.894042 kubelet[2118]: I0909 00:43:37.893632 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-ca-bundle\") pod \"9a8f799c-a3fc-45db-a183-e49514e8c126\" (UID: \"9a8f799c-a3fc-45db-a183-e49514e8c126\") " Sep 9 00:43:37.896313 kubelet[2118]: I0909 00:43:37.896251 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a8f799c-a3fc-45db-a183-e49514e8c126" (UID: "9a8f799c-a3fc-45db-a183-e49514e8c126"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:43:37.899721 kubelet[2118]: I0909 00:43:37.899572 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8f799c-a3fc-45db-a183-e49514e8c126-kube-api-access-2b6ts" (OuterVolumeSpecName: "kube-api-access-2b6ts") pod "9a8f799c-a3fc-45db-a183-e49514e8c126" (UID: "9a8f799c-a3fc-45db-a183-e49514e8c126"). InnerVolumeSpecName "kube-api-access-2b6ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:43:37.904148 kubelet[2118]: I0909 00:43:37.904118 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a8f799c-a3fc-45db-a183-e49514e8c126" (UID: "9a8f799c-a3fc-45db-a183-e49514e8c126"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:43:37.915512 kubelet[2118]: I0909 00:43:37.915466 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g4zw6" podStartSLOduration=1.317114426 podStartE2EDuration="12.915443302s" podCreationTimestamp="2025-09-09 00:43:25 +0000 UTC" firstStartedPulling="2025-09-09 00:43:25.780581105 +0000 UTC m=+19.108567837" lastFinishedPulling="2025-09-09 00:43:37.378909981 +0000 UTC m=+30.706896713" observedRunningTime="2025-09-09 00:43:37.914871863 +0000 UTC m=+31.242858595" watchObservedRunningTime="2025-09-09 00:43:37.915443302 +0000 UTC m=+31.243430034" Sep 9 00:43:37.994794 kubelet[2118]: I0909 00:43:37.994692 2118 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:43:37.994794 kubelet[2118]: I0909 00:43:37.994721 2118 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a8f799c-a3fc-45db-a183-e49514e8c126-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:43:37.994794 kubelet[2118]: I0909 00:43:37.994732 2118 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b6ts\" (UniqueName: \"kubernetes.io/projected/9a8f799c-a3fc-45db-a183-e49514e8c126-kube-api-access-2b6ts\") on node \"localhost\" DevicePath \"\"" Sep 9 00:43:38.076512 systemd[1]: run-netns-cni\x2db755c226\x2d99f3\x2db09c\x2d68e8\x2d5c7d8eeda0ac.mount: Deactivated successfully. Sep 9 00:43:38.076646 systemd[1]: var-lib-kubelet-pods-9a8f799c\x2da3fc\x2d45db\x2da183\x2de49514e8c126-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2b6ts.mount: Deactivated successfully. 
Sep 9 00:43:38.076733 systemd[1]: var-lib-kubelet-pods-9a8f799c\x2da3fc\x2d45db\x2da183\x2de49514e8c126-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:43:38.237660 kubelet[2118]: W0909 00:43:38.237621 2118 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Sep 9 00:43:38.237793 kubelet[2118]: E0909 00:43:38.237666 2118 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 00:43:38.237793 kubelet[2118]: W0909 00:43:38.237621 2118 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Sep 9 00:43:38.237793 kubelet[2118]: E0909 00:43:38.237696 2118 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 00:43:38.297172 kubelet[2118]: I0909 00:43:38.297076 2118 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ecee34e-cbcc-46bd-865c-e23df2d9a2fd-whisker-ca-bundle\") pod \"whisker-7b66464454-t4j9n\" (UID: \"9ecee34e-cbcc-46bd-865c-e23df2d9a2fd\") " pod="calico-system/whisker-7b66464454-t4j9n" Sep 9 00:43:38.297350 kubelet[2118]: I0909 00:43:38.297336 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmsn\" (UniqueName: \"kubernetes.io/projected/9ecee34e-cbcc-46bd-865c-e23df2d9a2fd-kube-api-access-fzmsn\") pod \"whisker-7b66464454-t4j9n\" (UID: \"9ecee34e-cbcc-46bd-865c-e23df2d9a2fd\") " pod="calico-system/whisker-7b66464454-t4j9n" Sep 9 00:43:38.297441 kubelet[2118]: I0909 00:43:38.297429 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ecee34e-cbcc-46bd-865c-e23df2d9a2fd-whisker-backend-key-pair\") pod \"whisker-7b66464454-t4j9n\" (UID: \"9ecee34e-cbcc-46bd-865c-e23df2d9a2fd\") " pod="calico-system/whisker-7b66464454-t4j9n" Sep 9 00:43:38.782415 kubelet[2118]: I0909 00:43:38.782377 2118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8f799c-a3fc-45db-a183-e49514e8c126" path="/var/lib/kubelet/pods/9a8f799c-a3fc-45db-a183-e49514e8c126/volumes" Sep 9 00:43:38.900910 kubelet[2118]: I0909 00:43:38.900871 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:43:38.971000 audit[3474]: AVC avc: denied { write } for pid=3474 comm="tee" name="fd" dev="proc" ino=18989 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:38.971000 audit[3474]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffea3d27f5 a2=241 a3=1b6 items=1 ppid=3434 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:38.977624 kernel: audit: type=1400 audit(1757378618.971:288): avc: denied { write } for pid=3474 comm="tee" name="fd" dev="proc" ino=18989 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:38.977702 kernel: audit: type=1300 audit(1757378618.971:288): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffea3d27f5 a2=241 a3=1b6 items=1 ppid=3434 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:38.977732 kernel: audit: type=1307 audit(1757378618.971:288): cwd="/etc/service/enabled/felix/log" Sep 9 00:43:38.971000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 9 00:43:38.971000 audit: PATH item=0 name="/dev/fd/63" inode=18984 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:38.984670 kernel: audit: type=1302 audit(1757378618.971:288): item=0 name="/dev/fd/63" inode=18984 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:38.984744 kernel: audit: type=1327 audit(1757378618.971:288): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:38.971000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:38.997000 audit[3500]: AVC avc: denied { write } for pid=3500 comm="tee" name="fd" dev="proc" ino=20577 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 Sep 9 00:43:38.997000 audit[3500]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffee9637f5 a2=241 a3=1b6 items=1 ppid=3435 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:39.004178 kernel: audit: type=1400 audit(1757378618.997:289): avc: denied { write } for pid=3500 comm="tee" name="fd" dev="proc" ino=20577 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:39.004258 kernel: audit: type=1300 audit(1757378618.997:289): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffee9637f5 a2=241 a3=1b6 items=1 ppid=3435 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:39.004285 kernel: audit: type=1307 audit(1757378618.997:289): cwd="/etc/service/enabled/confd/log" Sep 9 00:43:38.997000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 9 00:43:38.997000 audit: PATH item=0 name="/dev/fd/63" inode=20572 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:39.007249 kernel: audit: type=1302 audit(1757378618.997:289): item=0 name="/dev/fd/63" inode=20572 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:38.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:39.010077 kernel: audit: type=1327 audit(1757378618.997:289): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:39.000000 audit[3507]: AVC avc: denied { write } for pid=3507 comm="tee" name="fd" dev="proc" ino=20585 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:39.000000 audit[3507]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe92407f6 a2=241 a3=1b6 items=1 ppid=3440 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:39.000000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 9 00:43:39.000000 audit: PATH item=0 name="/dev/fd/63" inode=19010 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:39.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:39.006000 audit[3494]: AVC avc: denied { write } for pid=3494 comm="tee" name="fd" dev="proc" ino=19016 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:39.006000 audit[3494]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe89747f7 a2=241 a3=1b6 items=1 ppid=3444 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:39.006000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 9 00:43:39.006000 audit: PATH item=0 name="/dev/fd/63" inode=19007 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:39.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:39.010000 audit[3506]: AVC avc: denied { write } for pid=3506 comm="tee" name="fd" dev="proc" ino=19652 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:39.010000 audit[3506]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffa2c87e5 a2=241 a3=1b6 items=1 ppid=3433 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:39.010000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 9 00:43:39.010000 audit: PATH item=0 name="/dev/fd/63" inode=19649 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:39.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:39.031000 audit[3512]: AVC avc: denied { write } for pid=3512 comm="tee" name="fd" dev="proc" ino=19020 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:39.031000 audit[3512]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc74b27f5 a2=241 a3=1b6 items=1 ppid=3439 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:39.031000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 9 00:43:39.031000 audit: PATH item=0 name="/dev/fd/63" 
inode=20587 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:39.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:39.061000 audit[3510]: AVC avc: denied { write } for pid=3510 comm="tee" name="fd" dev="proc" ino=20592 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 9 00:43:39.061000 audit[3510]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe5aa37e6 a2=241 a3=1b6 items=1 ppid=3446 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:39.061000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 9 00:43:39.061000 audit: PATH item=0 name="/dev/fd/63" inode=19013 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:43:39.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 9 00:43:39.401487 kubelet[2118]: E0909 00:43:39.401452 2118 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 9 00:43:39.401715 kubelet[2118]: E0909 00:43:39.401701 2118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ecee34e-cbcc-46bd-865c-e23df2d9a2fd-whisker-backend-key-pair podName:9ecee34e-cbcc-46bd-865c-e23df2d9a2fd nodeName:}" failed. No retries permitted until 2025-09-09 00:43:39.901677796 +0000 UTC m=+33.229664528 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/9ecee34e-cbcc-46bd-865c-e23df2d9a2fd-whisker-backend-key-pair") pod "whisker-7b66464454-t4j9n" (UID: "9ecee34e-cbcc-46bd-865c-e23df2d9a2fd") : failed to sync secret cache: timed out waiting for the condition Sep 9 00:43:40.038930 env[1317]: time="2025-09-09T00:43:40.038567390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b66464454-t4j9n,Uid:9ecee34e-cbcc-46bd-865c-e23df2d9a2fd,Namespace:calico-system,Attempt:0,}" Sep 9 00:43:40.182880 systemd-networkd[1097]: cali22eacc8d558: Link UP Sep 9 00:43:40.183328 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:43:40.183364 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali22eacc8d558: link becomes ready Sep 9 00:43:40.183111 systemd-networkd[1097]: cali22eacc8d558: Gained carrier Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.073 [INFO][3527] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.089 [INFO][3527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b66464454--t4j9n-eth0 whisker-7b66464454- calico-system 9ecee34e-cbcc-46bd-865c-e23df2d9a2fd 923 0 2025-09-09 00:43:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b66464454 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b66464454-t4j9n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali22eacc8d558 [] [] }} ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.089 [INFO][3527] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-eth0" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.137 [INFO][3550] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" HandleID="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Workload="localhost-k8s-whisker--7b66464454--t4j9n-eth0" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.137 [INFO][3550] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" HandleID="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Workload="localhost-k8s-whisker--7b66464454--t4j9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004822d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b66464454-t4j9n", "timestamp":"2025-09-09 00:43:40.13761763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.137 [INFO][3550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.138 [INFO][3550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.138 [INFO][3550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.147 [INFO][3550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.153 [INFO][3550] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.158 [INFO][3550] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.159 [INFO][3550] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.161 [INFO][3550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.161 [INFO][3550] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.162 [INFO][3550] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2 Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.166 [INFO][3550] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.170 [INFO][3550] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" host="localhost" Sep 9 00:43:40.202147 
env[1317]: 2025-09-09 00:43:40.170 [INFO][3550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" host="localhost" Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.171 [INFO][3550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:40.202147 env[1317]: 2025-09-09 00:43:40.171 [INFO][3550] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" HandleID="k8s-pod-network.ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Workload="localhost-k8s-whisker--7b66464454--t4j9n-eth0" Sep 9 00:43:40.204054 env[1317]: 2025-09-09 00:43:40.174 [INFO][3527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b66464454--t4j9n-eth0", GenerateName:"whisker-7b66464454-", Namespace:"calico-system", SelfLink:"", UID:"9ecee34e-cbcc-46bd-865c-e23df2d9a2fd", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b66464454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b66464454-t4j9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22eacc8d558", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:40.204054 env[1317]: 2025-09-09 00:43:40.174 [INFO][3527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-eth0" Sep 9 00:43:40.204054 env[1317]: 2025-09-09 00:43:40.174 [INFO][3527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22eacc8d558 ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-eth0" Sep 9 00:43:40.204054 env[1317]: 2025-09-09 00:43:40.182 [INFO][3527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-eth0" Sep 9 00:43:40.204054 env[1317]: 2025-09-09 00:43:40.183 [INFO][3527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b66464454--t4j9n-eth0", GenerateName:"whisker-7b66464454-", Namespace:"calico-system", SelfLink:"", UID:"9ecee34e-cbcc-46bd-865c-e23df2d9a2fd", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b66464454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2", Pod:"whisker-7b66464454-t4j9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22eacc8d558", MAC:"72:a1:55:d8:5e:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:40.204054 env[1317]: 2025-09-09 00:43:40.200 [INFO][3527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2" Namespace="calico-system" Pod="whisker-7b66464454-t4j9n" WorkloadEndpoint="localhost-k8s-whisker--7b66464454--t4j9n-eth0" Sep 9 00:43:40.221178 env[1317]: time="2025-09-09T00:43:40.221122141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:40.221298 env[1317]: time="2025-09-09T00:43:40.221167901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:40.221368 env[1317]: time="2025-09-09T00:43:40.221339461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:40.221584 env[1317]: time="2025-09-09T00:43:40.221551140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2 pid=3589 runtime=io.containerd.runc.v2 Sep 9 00:43:40.248531 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:40.265937 env[1317]: time="2025-09-09T00:43:40.265888371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b66464454-t4j9n,Uid:9ecee34e-cbcc-46bd-865c-e23df2d9a2fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2\"" Sep 9 00:43:40.268574 env[1317]: time="2025-09-09T00:43:40.267498808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:43:41.192647 env[1317]: time="2025-09-09T00:43:41.192595112Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:41.195421 env[1317]: time="2025-09-09T00:43:41.195372627Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:41.198815 env[1317]: time="2025-09-09T00:43:41.198482101Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:41.199535 env[1317]: time="2025-09-09T00:43:41.199500539Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:41.200422 env[1317]: time="2025-09-09T00:43:41.200390097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 00:43:41.204500 env[1317]: time="2025-09-09T00:43:41.204473329Z" level=info msg="CreateContainer within sandbox \"ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:43:41.218411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087321788.mount: Deactivated successfully. 
Sep 9 00:43:41.229458 env[1317]: time="2025-09-09T00:43:41.229412680Z" level=info msg="CreateContainer within sandbox \"ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bf4fdbb455fb1266749126bf08c9a9e6aca1c2aaee8d59c6d087c4c6246fd831\"" Sep 9 00:43:41.230199 env[1317]: time="2025-09-09T00:43:41.230152599Z" level=info msg="StartContainer for \"bf4fdbb455fb1266749126bf08c9a9e6aca1c2aaee8d59c6d087c4c6246fd831\"" Sep 9 00:43:41.273461 kubelet[2118]: I0909 00:43:41.272426 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:43:41.273461 kubelet[2118]: E0909 00:43:41.272786 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:41.307883 env[1317]: time="2025-09-09T00:43:41.307191408Z" level=info msg="StartContainer for \"bf4fdbb455fb1266749126bf08c9a9e6aca1c2aaee8d59c6d087c4c6246fd831\" returns successfully" Sep 9 00:43:41.307883 env[1317]: time="2025-09-09T00:43:41.308637966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:43:41.329000 audit[3681]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:41.329000 audit[3681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffde81ec20 a2=0 a3=1 items=0 ppid=2268 pid=3681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:41.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:41.333000 audit[3681]: NETFILTER_CFG table=nat:100 family=2 entries=19 
op=nft_register_chain pid=3681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:41.333000 audit[3681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffde81ec20 a2=0 a3=1 items=0 ppid=2268 pid=3681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:41.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:41.908390 systemd-networkd[1097]: cali22eacc8d558: Gained IPv6LL Sep 9 00:43:41.909590 kubelet[2118]: E0909 00:43:41.909564 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 
audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.249000 audit: BPF prog-id=10 op=LOAD Sep 9 00:43:42.249000 audit[3712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4e231d8 a2=98 a3=fffff4e231c8 items=0 ppid=3688 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.249000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 9 00:43:42.251000 audit: BPF prog-id=10 op=UNLOAD Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit: BPF prog-id=11 op=LOAD Sep 9 00:43:42.251000 audit[3712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4e23088 a2=74 a3=95 items=0 ppid=3688 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.251000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 9 00:43:42.251000 audit: BPF prog-id=11 op=UNLOAD Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit: BPF prog-id=12 op=LOAD Sep 9 00:43:42.251000 audit[3712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4e230b8 a2=40 a3=fffff4e230e8 items=0 ppid=3688 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 9 00:43:42.251000 audit: BPF prog-id=12 op=UNLOAD Sep 9 00:43:42.251000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.251000 audit[3712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=fffff4e231d0 a2=50 a3=0 items=0 ppid=3688 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit: BPF prog-id=13 op=LOAD Sep 9 00:43:42.257000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd899ffd8 a2=98 a3=ffffd899ffc8 items=0 ppid=3688 pid=3713 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.257000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.257000 audit: BPF prog-id=13 op=UNLOAD Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit: BPF prog-id=14 op=LOAD Sep 9 00:43:42.257000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd899fc68 a2=74 a3=95 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.257000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.257000 audit: BPF prog-id=14 op=UNLOAD Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.257000 audit: BPF prog-id=15 op=LOAD Sep 9 00:43:42.257000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd899fcc8 a2=94 a3=2 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.257000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.257000 audit: BPF prog-id=15 op=UNLOAD Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit: BPF prog-id=16 op=LOAD Sep 9 00:43:42.357000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd899fc88 a2=40 a3=ffffd899fcb8 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.357000 audit: BPF prog-id=16 op=UNLOAD Sep 9 00:43:42.357000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.357000 audit[3713]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd899fda0 a2=50 a3=0 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd899fcf8 a2=28 a3=ffffd899fe28 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd899fd28 a2=28 a3=ffffd899fe58 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 
a1=ffffd899fbd8 a2=28 a3=ffffd899fd08 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd899fd48 a2=28 a3=ffffd899fe78 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd899fd28 a2=28 a3=ffffd899fe58 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd899fd18 a2=28 a3=ffffd899fe48 items=0 ppid=3688 
pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd899fd48 a2=28 a3=ffffd899fe78 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd899fd28 a2=28 a3=ffffd899fe58 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd899fd48 a2=28 a3=ffffd899fe78 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd899fd18 a2=28 a3=ffffd899fe48 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd899fd98 a2=28 a3=ffffd899fed8 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd899fad0 a2=50 a3=0 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit: BPF prog-id=17 op=LOAD Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd899fad8 a2=94 a3=5 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.369000 audit: BPF prog-id=17 op=UNLOAD Sep 9 00:43:42.369000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.369000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd899fbe0 a2=50 a3=0 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.369000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd899fd28 a2=4 a3=3 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.370000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { confidentiality } for pid=3713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 9 00:43:42.370000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd899fd08 a2=94 a3=6 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.370000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { confidentiality } for pid=3713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 9 00:43:42.370000 audit[3713]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd899f4d8 a2=94 a3=83 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.370000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 
00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.370000 audit[3713]: AVC avc: denied { confidentiality } for pid=3713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 9 00:43:42.370000 audit[3713]: 
SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd899f4d8 a2=94 a3=83 items=0 ppid=3688 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.370000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit: BPF prog-id=18 op=LOAD Sep 9 00:43:42.381000 audit[3740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0696ee8 a2=98 a3=fffff0696ed8 items=0 ppid=3688 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.381000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 9 00:43:42.381000 audit: BPF prog-id=18 op=UNLOAD Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for 
pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit: BPF prog-id=19 op=LOAD Sep 9 00:43:42.381000 audit[3740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0696d98 a2=74 a3=95 items=0 ppid=3688 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.381000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 9 00:43:42.381000 audit: BPF prog-id=19 op=UNLOAD Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: 
denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.381000 audit: BPF prog-id=20 op=LOAD Sep 9 00:43:42.381000 audit[3740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0696dc8 a2=40 a3=fffff0696df8 items=0 ppid=3688 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.381000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 9 00:43:42.381000 audit: BPF prog-id=20 op=UNLOAD Sep 9 00:43:42.446022 systemd-networkd[1097]: vxlan.calico: Link UP Sep 9 00:43:42.446031 systemd-networkd[1097]: vxlan.calico: Gained carrier Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.460000 audit: BPF prog-id=21 op=LOAD Sep 9 00:43:42.460000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6939c18 a2=98 a3=fffff6939c08 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.460000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit: BPF prog-id=21 op=UNLOAD Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit: BPF prog-id=22 op=LOAD Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff69398f8 a2=74 a3=95 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit: BPF prog-id=22 op=UNLOAD Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit: BPF prog-id=23 op=LOAD Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6939958 a2=94 a3=2 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit: BPF prog-id=23 op=UNLOAD Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff6939988 a2=28 a3=fffff6939ab8 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff69399b8 a2=28 a3=fffff6939ae8 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff6939868 a2=28 a3=fffff6939998 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff69399d8 a2=28 a3=fffff6939b08 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff69399b8 a2=28 a3=fffff6939ae8 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff69399a8 a2=28 a3=fffff6939ad8 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff69399d8 a2=28 a3=fffff6939b08 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff69399b8 a2=28 a3=fffff6939ae8 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff69399d8 a2=28 a3=fffff6939b08 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff69399a8 a2=28 a3=fffff6939ad8 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff6939a28 a2=28 a3=fffff6939b68 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 
audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.463000 audit: BPF prog-id=24 op=LOAD Sep 9 00:43:42.463000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6939848 a2=40 a3=fffff6939878 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.463000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.463000 audit: BPF prog-id=24 op=UNLOAD Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffff6939870 a2=50 a3=0 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffff6939870 a2=50 a3=0 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit: BPF prog-id=25 op=LOAD Sep 9 00:43:42.464000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6938fd8 a2=94 a3=2 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.464000 audit: BPF prog-id=25 op=UNLOAD Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.464000 audit: BPF prog-id=26 op=LOAD Sep 9 00:43:42.464000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6939168 a2=94 a3=30 items=0 ppid=3688 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for 
pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit: BPF prog-id=27 op=LOAD Sep 9 00:43:42.467000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffde613668 a2=98 a3=ffffde613658 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.467000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.467000 audit: BPF prog-id=27 op=UNLOAD Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: 
denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit: BPF prog-id=28 op=LOAD Sep 9 00:43:42.467000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde6132f8 a2=74 a3=95 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.467000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.467000 audit: BPF prog-id=28 op=UNLOAD Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.467000 audit: BPF prog-id=29 op=LOAD Sep 9 00:43:42.467000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde613358 a2=94 a3=2 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.467000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.467000 audit: BPF prog-id=29 op=UNLOAD Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit: BPF prog-id=30 op=LOAD Sep 9 00:43:42.558000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde613318 a2=40 a3=ffffde613348 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.558000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.558000 
audit: BPF prog-id=30 op=UNLOAD Sep 9 00:43:42.558000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.558000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffde613430 a2=50 a3=0 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.558000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde613388 a2=28 a3=ffffde6134b8 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde6133b8 a2=28 a3=ffffde6134e8 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde613268 a2=28 a3=ffffde613398 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde6133d8 a2=28 a3=ffffde613508 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for 
pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde6133b8 a2=28 a3=ffffde6134e8 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde6133a8 a2=28 a3=ffffde6134d8 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde6133d8 a2=28 a3=ffffde613508 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde6133b8 a2=28 a3=ffffde6134e8 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde6133d8 a2=28 a3=ffffde613508 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde6133a8 a2=28 a3=ffffde6134d8 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.567000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.567000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde613428 a2=28 a3=ffffde613568 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffde613160 a2=50 a3=0 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.568000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit: BPF prog-id=31 op=LOAD Sep 9 00:43:42.568000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffde613168 a2=94 a3=5 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.568000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.568000 audit: BPF prog-id=31 op=UNLOAD Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffde613270 a2=50 a3=0 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.568000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffde6133b8 a2=4 a3=3 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.568000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { confidentiality } for pid=3769 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 9 00:43:42.568000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde613398 a2=94 a3=6 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.568000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { 
bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { confidentiality } for pid=3769 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 9 00:43:42.568000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde612b68 a2=94 a3=83 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.568000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 
00:43:42.568000 audit[3769]: AVC avc: denied { perfmon } for pid=3769 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.568000 audit[3769]: AVC avc: denied { confidentiality } for pid=3769 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 9 00:43:42.568000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde612b68 a2=94 a3=83 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.568000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.569000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.569000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde6145a8 a2=10 a3=ffffde614698 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.569000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.569000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.569000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde614468 a2=10 a3=ffffde614558 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.569000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.569000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.569000 audit[3769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde6143d8 a2=10 a3=ffffde614558 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.569000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.569000 audit[3769]: AVC avc: denied { bpf } for pid=3769 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 9 00:43:42.569000 audit[3769]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde6143d8 a2=10 a3=ffffde614558 items=0 ppid=3688 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.569000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 9 00:43:42.575000 audit: BPF prog-id=26 op=UNLOAD Sep 9 00:43:42.640000 audit[3799]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3799 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:42.640000 audit[3799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd111fa10 a2=0 a3=ffffa62a3fa8 items=0 ppid=3688 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.640000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:42.646000 audit[3798]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3798 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:42.646000 audit[3798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffffd677380 a2=0 a3=ffff91037fa8 items=0 ppid=3688 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.646000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:42.647000 audit[3800]: NETFILTER_CFG table=filter:103 family=2 entries=94 op=nft_register_chain pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:42.647000 audit[3800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffedba3fe0 a2=0 a3=ffffb82d9fa8 items=0 ppid=3688 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.647000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:42.662000 audit[3802]: NETFILTER_CFG table=raw:104 family=2 entries=21 op=nft_register_chain pid=3802 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:42.662000 audit[3802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe19e4f00 a2=0 a3=ffffbeb78fa8 items=0 ppid=3688 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:42.662000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:42.783385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711041098.mount: Deactivated successfully. 
Sep 9 00:43:42.884054 env[1317]: time="2025-09-09T00:43:42.883307580Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:42.886901 env[1317]: time="2025-09-09T00:43:42.886858613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:42.889429 env[1317]: time="2025-09-09T00:43:42.889387328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:42.891573 env[1317]: time="2025-09-09T00:43:42.891527164Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:42.891869 env[1317]: time="2025-09-09T00:43:42.891830723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 00:43:42.894741 env[1317]: time="2025-09-09T00:43:42.894700438Z" level=info msg="CreateContainer within sandbox \"ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:43:42.912337 env[1317]: time="2025-09-09T00:43:42.912248645Z" level=info msg="CreateContainer within sandbox \"ca4f8544a38f525651526dd834a0b184857932c6148be118acf5ba20131d53d2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ca7030d687ece6303b44b177000f19cc3147bec59f3e7b57b06e87fb10a6b356\"" Sep 9 00:43:42.912850 env[1317]: 
time="2025-09-09T00:43:42.912786084Z" level=info msg="StartContainer for \"ca7030d687ece6303b44b177000f19cc3147bec59f3e7b57b06e87fb10a6b356\"" Sep 9 00:43:42.973536 env[1317]: time="2025-09-09T00:43:42.973492089Z" level=info msg="StartContainer for \"ca7030d687ece6303b44b177000f19cc3147bec59f3e7b57b06e87fb10a6b356\" returns successfully" Sep 9 00:43:43.780199 env[1317]: time="2025-09-09T00:43:43.780148843Z" level=info msg="StopPodSandbox for \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\"" Sep 9 00:43:43.780454 env[1317]: time="2025-09-09T00:43:43.780150643Z" level=info msg="StopPodSandbox for \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\"" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.857 [INFO][3874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.857 [INFO][3874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" iface="eth0" netns="/var/run/netns/cni-bda7307c-f3aa-e1cc-58d5-e6439d0641c3" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.858 [INFO][3874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" iface="eth0" netns="/var/run/netns/cni-bda7307c-f3aa-e1cc-58d5-e6439d0641c3" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.858 [INFO][3874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" iface="eth0" netns="/var/run/netns/cni-bda7307c-f3aa-e1cc-58d5-e6439d0641c3" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.858 [INFO][3874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.858 [INFO][3874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.878 [INFO][3891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.878 [INFO][3891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.878 [INFO][3891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.890 [WARNING][3891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.890 [INFO][3891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.891 [INFO][3891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:43.895094 env[1317]: 2025-09-09 00:43:43.893 [INFO][3874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:43:43.899916 env[1317]: time="2025-09-09T00:43:43.895224911Z" level=info msg="TearDown network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\" successfully" Sep 9 00:43:43.899916 env[1317]: time="2025-09-09T00:43:43.895256751Z" level=info msg="StopPodSandbox for \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\" returns successfully" Sep 9 00:43:43.899916 env[1317]: time="2025-09-09T00:43:43.898162066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-v64bm,Uid:767b79c6-02cc-4919-ae65-36b5295c2cf4,Namespace:calico-system,Attempt:1,}" Sep 9 00:43:43.897383 systemd[1]: run-netns-cni\x2dbda7307c\x2df3aa\x2de1cc\x2d58d5\x2de6439d0641c3.mount: Deactivated successfully. 
Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.860 [INFO][3875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.860 [INFO][3875] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" iface="eth0" netns="/var/run/netns/cni-90e86523-6792-9423-d8b0-334607022840" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.860 [INFO][3875] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" iface="eth0" netns="/var/run/netns/cni-90e86523-6792-9423-d8b0-334607022840" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.860 [INFO][3875] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" iface="eth0" netns="/var/run/netns/cni-90e86523-6792-9423-d8b0-334607022840" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.860 [INFO][3875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.860 [INFO][3875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.904 [INFO][3893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.905 [INFO][3893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.905 [INFO][3893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.917 [WARNING][3893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.917 [INFO][3893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.919 [INFO][3893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:43.929428 env[1317]: 2025-09-09 00:43:43.926 [INFO][3875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:43:43.932845 env[1317]: time="2025-09-09T00:43:43.929656808Z" level=info msg="TearDown network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\" successfully" Sep 9 00:43:43.932845 env[1317]: time="2025-09-09T00:43:43.929787648Z" level=info msg="StopPodSandbox for \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\" returns successfully" Sep 9 00:43:43.931687 systemd[1]: run-netns-cni\x2d90e86523\x2d6792\x2d9423\x2dd8b0\x2d334607022840.mount: Deactivated successfully. 
Sep 9 00:43:43.934466 kubelet[2118]: I0909 00:43:43.930711 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b66464454-t4j9n" podStartSLOduration=3.304508894 podStartE2EDuration="5.930695246s" podCreationTimestamp="2025-09-09 00:43:38 +0000 UTC" firstStartedPulling="2025-09-09 00:43:40.266996809 +0000 UTC m=+33.594983501" lastFinishedPulling="2025-09-09 00:43:42.893183121 +0000 UTC m=+36.221169853" observedRunningTime="2025-09-09 00:43:43.930471087 +0000 UTC m=+37.258457819" watchObservedRunningTime="2025-09-09 00:43:43.930695246 +0000 UTC m=+37.258681978" Sep 9 00:43:43.934789 env[1317]: time="2025-09-09T00:43:43.934173440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-td7gz,Uid:ec54d5e2-70bd-4445-9ea0-62cda1c0ae32,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:43:43.945000 audit[3921]: NETFILTER_CFG table=filter:105 family=2 entries=19 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:43.945000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc1e14f60 a2=0 a3=1 items=0 ppid=2268 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:43.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:43.951000 audit[3921]: NETFILTER_CFG table=nat:106 family=2 entries=21 op=nft_register_chain pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:43.951000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=ffffc1e14f60 a2=0 a3=1 items=0 ppid=2268 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:43.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:43.957298 systemd-networkd[1097]: vxlan.calico: Gained IPv6LL Sep 9 00:43:44.039937 systemd-networkd[1097]: cali20d018efbab: Link UP Sep 9 00:43:44.041436 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:43:44.041502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali20d018efbab: link becomes ready Sep 9 00:43:44.041622 systemd-networkd[1097]: cali20d018efbab: Gained carrier Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:43.965 [INFO][3907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--v64bm-eth0 goldmane-7988f88666- calico-system 767b79c6-02cc-4919-ae65-36b5295c2cf4 955 0 2025-09-09 00:43:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-v64bm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali20d018efbab [] [] }} ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:43.965 [INFO][3907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:43.999 [INFO][3939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" HandleID="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:43.999 [INFO][3939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" HandleID="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b40a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-v64bm", "timestamp":"2025-09-09 00:43:43.999732759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.000 [INFO][3939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.000 [INFO][3939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.000 [INFO][3939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.011 [INFO][3939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.015 [INFO][3939] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.021 [INFO][3939] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.023 [INFO][3939] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.025 [INFO][3939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.025 [INFO][3939] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.027 [INFO][3939] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97 Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.030 [INFO][3939] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.035 [INFO][3939] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" host="localhost" Sep 9 00:43:44.060185 
env[1317]: 2025-09-09 00:43:44.036 [INFO][3939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" host="localhost" Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.036 [INFO][3939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:44.060185 env[1317]: 2025-09-09 00:43:44.036 [INFO][3939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" HandleID="k8s-pod-network.e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:44.060833 env[1317]: 2025-09-09 00:43:44.038 [INFO][3907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--v64bm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"767b79c6-02cc-4919-ae65-36b5295c2cf4", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-v64bm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d018efbab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:44.060833 env[1317]: 2025-09-09 00:43:44.038 [INFO][3907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:44.060833 env[1317]: 2025-09-09 00:43:44.038 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20d018efbab ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:44.060833 env[1317]: 2025-09-09 00:43:44.041 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:44.060833 env[1317]: 2025-09-09 00:43:44.046 [INFO][3907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--v64bm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"767b79c6-02cc-4919-ae65-36b5295c2cf4", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97", Pod:"goldmane-7988f88666-v64bm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d018efbab", MAC:"c6:63:e9:c2:93:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:44.060833 env[1317]: 2025-09-09 00:43:44.056 [INFO][3907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97" Namespace="calico-system" Pod="goldmane-7988f88666-v64bm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:43:44.069000 audit[3968]: NETFILTER_CFG table=filter:107 family=2 entries=44 op=nft_register_chain pid=3968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:44.071276 kernel: kauditd_printk_skb: 559 callbacks suppressed Sep 9 00:43:44.071344 kernel: audit: type=1325 
audit(1757378624.069:401): table=filter:107 family=2 entries=44 op=nft_register_chain pid=3968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:44.069000 audit[3968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffd9dd79a0 a2=0 a3=ffffa4132fa8 items=0 ppid=3688 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:44.074706 env[1317]: time="2025-09-09T00:43:44.074640825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:44.074787 env[1317]: time="2025-09-09T00:43:44.074720905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:44.074787 env[1317]: time="2025-09-09T00:43:44.074748505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:44.074984 env[1317]: time="2025-09-09T00:43:44.074934705Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97 pid=3976 runtime=io.containerd.runc.v2 Sep 9 00:43:44.076006 kernel: audit: type=1300 audit(1757378624.069:401): arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffd9dd79a0 a2=0 a3=ffffa4132fa8 items=0 ppid=3688 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:44.076058 kernel: audit: type=1327 audit(1757378624.069:401): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:44.069000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:44.101500 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:44.120629 env[1317]: time="2025-09-09T00:43:44.120581823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-v64bm,Uid:767b79c6-02cc-4919-ae65-36b5295c2cf4,Namespace:calico-system,Attempt:1,} returns sandbox id \"e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97\"" Sep 9 00:43:44.122736 env[1317]: time="2025-09-09T00:43:44.122662179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:43:44.146102 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic9ff5506ddb: link becomes ready Sep 9 00:43:44.146471 systemd-networkd[1097]: calic9ff5506ddb: Link UP Sep 9 00:43:44.146616 systemd-networkd[1097]: calic9ff5506ddb: 
Gained carrier Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:43.985 [INFO][3924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0 calico-apiserver-55cdd6bdb6- calico-apiserver ec54d5e2-70bd-4445-9ea0-62cda1c0ae32 956 0 2025-09-09 00:43:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55cdd6bdb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55cdd6bdb6-td7gz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9ff5506ddb [] [] }} ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:43.985 [INFO][3924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.013 [INFO][3947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" HandleID="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.013 [INFO][3947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" 
HandleID="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55cdd6bdb6-td7gz", "timestamp":"2025-09-09 00:43:44.013553334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.013 [INFO][3947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.036 [INFO][3947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.036 [INFO][3947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.115 [INFO][3947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.120 [INFO][3947] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.125 [INFO][3947] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.127 [INFO][3947] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.129 [INFO][3947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.129 [INFO][3947] ipam/ipam.go 1220: Attempting to assign 
1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.130 [INFO][3947] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4 Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.133 [INFO][3947] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.140 [INFO][3947] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.140 [INFO][3947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" host="localhost" Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.140 [INFO][3947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:43:44.160164 env[1317]: 2025-09-09 00:43:44.140 [INFO][3947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" HandleID="k8s-pod-network.72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:44.163398 env[1317]: 2025-09-09 00:43:44.143 [INFO][3924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55cdd6bdb6-td7gz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9ff5506ddb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:44.163398 env[1317]: 2025-09-09 00:43:44.143 [INFO][3924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:44.163398 env[1317]: 2025-09-09 00:43:44.143 [INFO][3924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9ff5506ddb ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:44.163398 env[1317]: 2025-09-09 00:43:44.146 [INFO][3924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:44.163398 env[1317]: 2025-09-09 00:43:44.147 [INFO][3924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4", Pod:"calico-apiserver-55cdd6bdb6-td7gz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9ff5506ddb", MAC:"da:8a:79:0d:cb:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:44.163398 env[1317]: 2025-09-09 00:43:44.156 [INFO][3924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-td7gz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:43:44.171000 audit[4024]: NETFILTER_CFG table=filter:108 family=2 entries=60 op=nft_register_chain pid=4024 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:44.171000 audit[4024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32248 a0=3 a1=fffffaf43730 a2=0 a3=ffff844c4fa8 items=0 ppid=3688 pid=4024 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:44.175109 env[1317]: time="2025-09-09T00:43:44.172898050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:44.175109 env[1317]: time="2025-09-09T00:43:44.172959210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:44.175109 env[1317]: time="2025-09-09T00:43:44.172969970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:44.176205 env[1317]: time="2025-09-09T00:43:44.176116444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4 pid=4025 runtime=io.containerd.runc.v2 Sep 9 00:43:44.177866 kernel: audit: type=1325 audit(1757378624.171:402): table=filter:108 family=2 entries=60 op=nft_register_chain pid=4024 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:44.177934 kernel: audit: type=1300 audit(1757378624.171:402): arch=c00000b7 syscall=211 success=yes exit=32248 a0=3 a1=fffffaf43730 a2=0 a3=ffff844c4fa8 items=0 ppid=3688 pid=4024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:44.177957 kernel: audit: type=1327 audit(1757378624.171:402): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:44.171000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:44.207737 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:44.227482 env[1317]: time="2025-09-09T00:43:44.227443392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-td7gz,Uid:ec54d5e2-70bd-4445-9ea0-62cda1c0ae32,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4\"" Sep 9 00:43:44.780453 env[1317]: time="2025-09-09T00:43:44.780221204Z" level=info msg="StopPodSandbox for \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\"" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.840 [INFO][4073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.840 [INFO][4073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" iface="eth0" netns="/var/run/netns/cni-711b4684-2d3f-59f5-7b87-b96387976ea4" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.841 [INFO][4073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" iface="eth0" netns="/var/run/netns/cni-711b4684-2d3f-59f5-7b87-b96387976ea4" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.841 [INFO][4073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" iface="eth0" netns="/var/run/netns/cni-711b4684-2d3f-59f5-7b87-b96387976ea4" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.841 [INFO][4073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.841 [INFO][4073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.861 [INFO][4082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.862 [INFO][4082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.862 [INFO][4082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.870 [WARNING][4082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.870 [INFO][4082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.873 [INFO][4082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:44.878667 env[1317]: 2025-09-09 00:43:44.876 [INFO][4073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:43:44.879376 env[1317]: time="2025-09-09T00:43:44.879344067Z" level=info msg="TearDown network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\" successfully" Sep 9 00:43:44.879460 env[1317]: time="2025-09-09T00:43:44.879443067Z" level=info msg="StopPodSandbox for \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\" returns successfully" Sep 9 00:43:44.880166 env[1317]: time="2025-09-09T00:43:44.880136946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-9k5zf,Uid:53253ab8-84f6-4a5e-8e9a-c2b463038540,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:43:44.881912 systemd[1]: run-netns-cni\x2d711b4684\x2d2d3f\x2d59f5\x2d7b87\x2db96387976ea4.mount: Deactivated successfully. 
Sep 9 00:43:45.120313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:43:45.120442 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali128d51d5ffe: link becomes ready Sep 9 00:43:45.121918 systemd-networkd[1097]: cali128d51d5ffe: Link UP Sep 9 00:43:45.122105 systemd-networkd[1097]: cali128d51d5ffe: Gained carrier Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:44.973 [INFO][4091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0 calico-apiserver-55cdd6bdb6- calico-apiserver 53253ab8-84f6-4a5e-8e9a-c2b463038540 973 0 2025-09-09 00:43:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55cdd6bdb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55cdd6bdb6-9k5zf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali128d51d5ffe [] [] }} ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:44.974 [INFO][4091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.025 [INFO][4108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" HandleID="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" 
Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.025 [INFO][4108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" HandleID="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55cdd6bdb6-9k5zf", "timestamp":"2025-09-09 00:43:45.025032808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.025 [INFO][4108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.025 [INFO][4108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.025 [INFO][4108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.036 [INFO][4108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.041 [INFO][4108] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.059 [INFO][4108] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.062 [INFO][4108] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.065 [INFO][4108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.065 [INFO][4108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.073 [INFO][4108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.080 [INFO][4108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.102 [INFO][4108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" host="localhost" Sep 9 00:43:45.140553 
env[1317]: 2025-09-09 00:43:45.102 [INFO][4108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" host="localhost" Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.102 [INFO][4108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:45.140553 env[1317]: 2025-09-09 00:43:45.106 [INFO][4108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" HandleID="k8s-pod-network.ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:45.141660 env[1317]: 2025-09-09 00:43:45.117 [INFO][4091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"53253ab8-84f6-4a5e-8e9a-c2b463038540", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55cdd6bdb6-9k5zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali128d51d5ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:45.141660 env[1317]: 2025-09-09 00:43:45.117 [INFO][4091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:45.141660 env[1317]: 2025-09-09 00:43:45.117 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali128d51d5ffe ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:45.141660 env[1317]: 2025-09-09 00:43:45.120 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:45.141660 env[1317]: 2025-09-09 00:43:45.121 [INFO][4091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"53253ab8-84f6-4a5e-8e9a-c2b463038540", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc", Pod:"calico-apiserver-55cdd6bdb6-9k5zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali128d51d5ffe", MAC:"1a:12:97:05:3a:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:45.141660 env[1317]: 2025-09-09 00:43:45.135 [INFO][4091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc" Namespace="calico-apiserver" Pod="calico-apiserver-55cdd6bdb6-9k5zf" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:43:45.149000 audit[4124]: NETFILTER_CFG table=filter:109 family=2 entries=41 op=nft_register_chain pid=4124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:45.149000 audit[4124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23060 a0=3 a1=fffff5bb11c0 a2=0 a3=ffffa0804fa8 items=0 ppid=3688 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:45.155396 kernel: audit: type=1325 audit(1757378625.149:403): table=filter:109 family=2 entries=41 op=nft_register_chain pid=4124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:45.155469 kernel: audit: type=1300 audit(1757378625.149:403): arch=c00000b7 syscall=211 success=yes exit=23060 a0=3 a1=fffff5bb11c0 a2=0 a3=ffffa0804fa8 items=0 ppid=3688 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:45.155497 kernel: audit: type=1327 audit(1757378625.149:403): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:45.149000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:45.170159 env[1317]: time="2025-09-09T00:43:45.170091356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:45.170305 env[1317]: time="2025-09-09T00:43:45.170139516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:45.170389 env[1317]: time="2025-09-09T00:43:45.170359635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:45.170609 env[1317]: time="2025-09-09T00:43:45.170573675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc pid=4132 runtime=io.containerd.runc.v2 Sep 9 00:43:45.200588 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:45.224727 env[1317]: time="2025-09-09T00:43:45.224634701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cdd6bdb6-9k5zf,Uid:53253ab8-84f6-4a5e-8e9a-c2b463038540,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc\"" Sep 9 00:43:45.364275 systemd-networkd[1097]: cali20d018efbab: Gained IPv6LL Sep 9 00:43:45.676794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582046014.mount: Deactivated successfully. Sep 9 00:43:45.748169 systemd-networkd[1097]: calic9ff5506ddb: Gained IPv6LL Sep 9 00:43:45.994093 kubelet[2118]: I0909 00:43:45.993093 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:43:46.017962 systemd[1]: run-containerd-runc-k8s.io-2d0d2dfacbee8d568e807005ed149ffb92e30a97113c760c084c68f477849432-runc.W9Sebt.mount: Deactivated successfully. 
Sep 9 00:43:46.310837 env[1317]: time="2025-09-09T00:43:46.310784307Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:46.312210 env[1317]: time="2025-09-09T00:43:46.312181665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:46.314032 env[1317]: time="2025-09-09T00:43:46.313999142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:46.315593 env[1317]: time="2025-09-09T00:43:46.315569619Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:46.316108 env[1317]: time="2025-09-09T00:43:46.316085098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 00:43:46.317610 env[1317]: time="2025-09-09T00:43:46.317579096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:43:46.319155 env[1317]: time="2025-09-09T00:43:46.319123573Z" level=info msg="CreateContainer within sandbox \"e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:43:46.328919 env[1317]: time="2025-09-09T00:43:46.328878797Z" level=info msg="CreateContainer within sandbox \"e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"5b4f4b13f0d5378593f8098125dac60fa47de81c772837d7a3f867dd26962037\"" Sep 9 00:43:46.329466 env[1317]: time="2025-09-09T00:43:46.329437756Z" level=info msg="StartContainer for \"5b4f4b13f0d5378593f8098125dac60fa47de81c772837d7a3f867dd26962037\"" Sep 9 00:43:46.403462 env[1317]: time="2025-09-09T00:43:46.403346071Z" level=info msg="StartContainer for \"5b4f4b13f0d5378593f8098125dac60fa47de81c772837d7a3f867dd26962037\" returns successfully" Sep 9 00:43:46.781543 env[1317]: time="2025-09-09T00:43:46.780922632Z" level=info msg="StopPodSandbox for \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\"" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.839 [INFO][4259] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.839 [INFO][4259] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" iface="eth0" netns="/var/run/netns/cni-133b9e1b-8c04-0210-d3ce-0d007a8e44a3" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.840 [INFO][4259] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" iface="eth0" netns="/var/run/netns/cni-133b9e1b-8c04-0210-d3ce-0d007a8e44a3" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.841 [INFO][4259] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" iface="eth0" netns="/var/run/netns/cni-133b9e1b-8c04-0210-d3ce-0d007a8e44a3" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.841 [INFO][4259] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.841 [INFO][4259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.865 [INFO][4268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.866 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.866 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.878 [WARNING][4268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.878 [INFO][4268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.880 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:46.893583 env[1317]: 2025-09-09 00:43:46.888 [INFO][4259] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:43:46.894069 env[1317]: time="2025-09-09T00:43:46.893715041Z" level=info msg="TearDown network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\" successfully" Sep 9 00:43:46.894069 env[1317]: time="2025-09-09T00:43:46.893743601Z" level=info msg="StopPodSandbox for \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\" returns successfully" Sep 9 00:43:46.894625 env[1317]: time="2025-09-09T00:43:46.894596559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89f6f49cb-svnf4,Uid:506277be-dd46-4716-b8b9-1f3976363568,Namespace:calico-system,Attempt:1,}" Sep 9 00:43:46.959000 audit[4305]: NETFILTER_CFG table=filter:110 family=2 entries=18 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:46.963434 kernel: audit: type=1325 audit(1757378626.959:404): table=filter:110 family=2 entries=18 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:46.959000 
audit[4305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd1210cd0 a2=0 a3=1 items=0 ppid=2268 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:46.959000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:46.964000 audit[4305]: NETFILTER_CFG table=nat:111 family=2 entries=16 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:46.964000 audit[4305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=ffffd1210cd0 a2=0 a3=1 items=0 ppid=2268 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:46.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:47.028647 systemd-networkd[1097]: cali6b86d719201: Link UP Sep 9 00:43:47.032463 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:43:47.032857 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6b86d719201: link becomes ready Sep 9 00:43:47.032711 systemd-networkd[1097]: cali6b86d719201: Gained carrier Sep 9 00:43:47.050796 kubelet[2118]: I0909 00:43:47.050729 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-v64bm" podStartSLOduration=19.85602702 podStartE2EDuration="22.050709377s" podCreationTimestamp="2025-09-09 00:43:25 +0000 UTC" firstStartedPulling="2025-09-09 00:43:44.1222565 +0000 UTC m=+37.450243232" lastFinishedPulling="2025-09-09 00:43:46.316938897 +0000 UTC m=+39.644925589" observedRunningTime="2025-09-09 00:43:46.954276778 +0000 
UTC m=+40.282263510" watchObservedRunningTime="2025-09-09 00:43:47.050709377 +0000 UTC m=+40.378696109" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.940 [INFO][4277] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0 calico-kube-controllers-89f6f49cb- calico-system 506277be-dd46-4716-b8b9-1f3976363568 989 0 2025-09-09 00:43:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:89f6f49cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-89f6f49cb-svnf4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6b86d719201 [] [] }} ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.940 [INFO][4277] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.978 [INFO][4293] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" HandleID="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.978 [INFO][4293] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" HandleID="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-89f6f49cb-svnf4", "timestamp":"2025-09-09 00:43:46.978241498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.978 [INFO][4293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.978 [INFO][4293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.978 [INFO][4293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.990 [INFO][4293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:46.995 [INFO][4293] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.000 [INFO][4293] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.003 [INFO][4293] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.005 [INFO][4293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:47.058940 
env[1317]: 2025-09-09 00:43:47.005 [INFO][4293] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.010 [INFO][4293] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.014 [INFO][4293] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.021 [INFO][4293] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.021 [INFO][4293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" host="localhost" Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.022 [INFO][4293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:43:47.058940 env[1317]: 2025-09-09 00:43:47.022 [INFO][4293] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" HandleID="k8s-pod-network.432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:47.059896 env[1317]: 2025-09-09 00:43:47.024 [INFO][4277] cni-plugin/k8s.go 418: Populated endpoint ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0", GenerateName:"calico-kube-controllers-89f6f49cb-", Namespace:"calico-system", SelfLink:"", UID:"506277be-dd46-4716-b8b9-1f3976363568", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89f6f49cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-89f6f49cb-svnf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b86d719201", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:47.059896 env[1317]: 2025-09-09 00:43:47.024 [INFO][4277] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:47.059896 env[1317]: 2025-09-09 00:43:47.024 [INFO][4277] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b86d719201 ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:47.059896 env[1317]: 2025-09-09 00:43:47.030 [INFO][4277] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:47.059896 env[1317]: 2025-09-09 00:43:47.034 [INFO][4277] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0", 
GenerateName:"calico-kube-controllers-89f6f49cb-", Namespace:"calico-system", SelfLink:"", UID:"506277be-dd46-4716-b8b9-1f3976363568", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89f6f49cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a", Pod:"calico-kube-controllers-89f6f49cb-svnf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b86d719201", MAC:"72:e7:27:63:c2:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:47.059896 env[1317]: 2025-09-09 00:43:47.052 [INFO][4277] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a" Namespace="calico-system" Pod="calico-kube-controllers-89f6f49cb-svnf4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:43:47.068000 audit[4337]: NETFILTER_CFG table=filter:112 family=2 entries=44 op=nft_register_chain pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:47.068000 audit[4337]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=21936 a0=3 a1=ffffc991c780 a2=0 a3=ffffbf1e8fa8 items=0 ppid=3688 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:47.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:47.073803 env[1317]: time="2025-09-09T00:43:47.073715219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:47.073803 env[1317]: time="2025-09-09T00:43:47.073776059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:47.073970 env[1317]: time="2025-09-09T00:43:47.073786539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:47.074363 env[1317]: time="2025-09-09T00:43:47.074297898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a pid=4343 runtime=io.containerd.runc.v2 Sep 9 00:43:47.104302 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:47.127537 env[1317]: time="2025-09-09T00:43:47.127487651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-89f6f49cb-svnf4,Uid:506277be-dd46-4716-b8b9-1f3976363568,Namespace:calico-system,Attempt:1,} returns sandbox id \"432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a\"" Sep 9 00:43:47.156292 systemd-networkd[1097]: cali128d51d5ffe: Gained IPv6LL Sep 9 00:43:47.217824 systemd[1]: run-netns-cni\x2d133b9e1b\x2d8c04\x2d0210\x2dd3ce\x2d0d007a8e44a3.mount: Deactivated successfully. Sep 9 00:43:47.952218 systemd[1]: run-containerd-runc-k8s.io-5b4f4b13f0d5378593f8098125dac60fa47de81c772837d7a3f867dd26962037-runc.ANK22a.mount: Deactivated successfully. 
Sep 9 00:43:48.159669 env[1317]: time="2025-09-09T00:43:48.159616355Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.161826 env[1317]: time="2025-09-09T00:43:48.161798471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.163484 env[1317]: time="2025-09-09T00:43:48.163460429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.164844 env[1317]: time="2025-09-09T00:43:48.164815467Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.165261 env[1317]: time="2025-09-09T00:43:48.165233426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 00:43:48.167060 env[1317]: time="2025-09-09T00:43:48.167022903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:43:48.167857 env[1317]: time="2025-09-09T00:43:48.167821742Z" level=info msg="CreateContainer within sandbox \"72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:43:48.180212 env[1317]: time="2025-09-09T00:43:48.180178642Z" level=info msg="CreateContainer within sandbox \"72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"6b40e71072c89ef8c535fe8dc157e29dbac2a3bb330a978c5c5ff8704ad3aa93\"" Sep 9 00:43:48.181827 env[1317]: time="2025-09-09T00:43:48.181776199Z" level=info msg="StartContainer for \"6b40e71072c89ef8c535fe8dc157e29dbac2a3bb330a978c5c5ff8704ad3aa93\"" Sep 9 00:43:48.238475 env[1317]: time="2025-09-09T00:43:48.238373988Z" level=info msg="StartContainer for \"6b40e71072c89ef8c535fe8dc157e29dbac2a3bb330a978c5c5ff8704ad3aa93\" returns successfully" Sep 9 00:43:48.438522 env[1317]: time="2025-09-09T00:43:48.438484386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.441048 env[1317]: time="2025-09-09T00:43:48.441010742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.443118 env[1317]: time="2025-09-09T00:43:48.443093739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.445271 env[1317]: time="2025-09-09T00:43:48.445246335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:48.445909 env[1317]: time="2025-09-09T00:43:48.445882774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 00:43:48.448067 env[1317]: time="2025-09-09T00:43:48.448021371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:43:48.449278 env[1317]: time="2025-09-09T00:43:48.449247809Z" 
level=info msg="CreateContainer within sandbox \"ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:43:48.461963 env[1317]: time="2025-09-09T00:43:48.461925749Z" level=info msg="CreateContainer within sandbox \"ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3703a47693aaf22abeed9c22308e40c2abe8bf146fae79669e4acc0d348680e8\"" Sep 9 00:43:48.462620 env[1317]: time="2025-09-09T00:43:48.462594148Z" level=info msg="StartContainer for \"3703a47693aaf22abeed9c22308e40c2abe8bf146fae79669e4acc0d348680e8\"" Sep 9 00:43:48.500134 systemd-networkd[1097]: cali6b86d719201: Gained IPv6LL Sep 9 00:43:48.525874 env[1317]: time="2025-09-09T00:43:48.525834566Z" level=info msg="StartContainer for \"3703a47693aaf22abeed9c22308e40c2abe8bf146fae79669e4acc0d348680e8\" returns successfully" Sep 9 00:43:48.781564 env[1317]: time="2025-09-09T00:43:48.781452595Z" level=info msg="StopPodSandbox for \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\"" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.852 [INFO][4497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.852 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" iface="eth0" netns="/var/run/netns/cni-cc053fd5-76ef-0a64-e7a4-72b73d221bd4" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.852 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" iface="eth0" netns="/var/run/netns/cni-cc053fd5-76ef-0a64-e7a4-72b73d221bd4" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.852 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" iface="eth0" netns="/var/run/netns/cni-cc053fd5-76ef-0a64-e7a4-72b73d221bd4" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.852 [INFO][4497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.852 [INFO][4497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.897 [INFO][4506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.897 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.897 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.908 [WARNING][4506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.908 [INFO][4506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.910 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:48.915991 env[1317]: 2025-09-09 00:43:48.914 [INFO][4497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:43:48.916581 env[1317]: time="2025-09-09T00:43:48.916548817Z" level=info msg="TearDown network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\" successfully" Sep 9 00:43:48.916674 env[1317]: time="2025-09-09T00:43:48.916658257Z" level=info msg="StopPodSandbox for \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\" returns successfully" Sep 9 00:43:48.917381 env[1317]: time="2025-09-09T00:43:48.917352456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b44f5,Uid:5fcbd175-b1d0-445a-87d8-30edc58c5294,Namespace:calico-system,Attempt:1,}" Sep 9 00:43:48.952019 kubelet[2118]: I0909 00:43:48.951103 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-9k5zf" podStartSLOduration=24.72906577 podStartE2EDuration="27.951087042s" podCreationTimestamp="2025-09-09 00:43:21 +0000 UTC" firstStartedPulling="2025-09-09 00:43:45.225801939 +0000 UTC m=+38.553788671" lastFinishedPulling="2025-09-09 00:43:48.447823211 +0000 UTC 
m=+41.775809943" observedRunningTime="2025-09-09 00:43:48.950927642 +0000 UTC m=+42.278914374" watchObservedRunningTime="2025-09-09 00:43:48.951087042 +0000 UTC m=+42.279073774" Sep 9 00:43:48.963000 audit[4527]: NETFILTER_CFG table=filter:113 family=2 entries=18 op=nft_register_rule pid=4527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:48.963000 audit[4527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc9cef310 a2=0 a3=1 items=0 ppid=2268 pid=4527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:48.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:48.970000 audit[4527]: NETFILTER_CFG table=nat:114 family=2 entries=16 op=nft_register_rule pid=4527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:48.970000 audit[4527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=ffffc9cef310 a2=0 a3=1 items=0 ppid=2268 pid=4527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:48.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:48.992000 audit[4531]: NETFILTER_CFG table=filter:115 family=2 entries=18 op=nft_register_rule pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:48.992000 audit[4531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffc3a6ad0 a2=0 a3=1 items=0 ppid=2268 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:48.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:48.996000 audit[4531]: NETFILTER_CFG table=nat:116 family=2 entries=16 op=nft_register_rule pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:48.996000 audit[4531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=fffffc3a6ad0 a2=0 a3=1 items=0 ppid=2268 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:48.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:49.070681 systemd-networkd[1097]: cali0ada2e50bd0: Link UP Sep 9 00:43:49.072773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:43:49.072862 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0ada2e50bd0: link becomes ready Sep 9 00:43:49.072921 systemd-networkd[1097]: cali0ada2e50bd0: Gained carrier Sep 9 00:43:49.088330 kubelet[2118]: I0909 00:43:49.086219 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55cdd6bdb6-td7gz" podStartSLOduration=24.148478314 podStartE2EDuration="28.086202708s" podCreationTimestamp="2025-09-09 00:43:21 +0000 UTC" firstStartedPulling="2025-09-09 00:43:44.22860295 +0000 UTC m=+37.556589682" lastFinishedPulling="2025-09-09 00:43:48.166327344 +0000 UTC m=+41.494314076" observedRunningTime="2025-09-09 00:43:48.964141421 +0000 UTC m=+42.292128153" watchObservedRunningTime="2025-09-09 00:43:49.086202708 +0000 UTC m=+42.414189440" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.003 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b44f5-eth0 csi-node-driver- calico-system 5fcbd175-b1d0-445a-87d8-30edc58c5294 1013 0 2025-09-09 00:43:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b44f5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0ada2e50bd0 [] [] }} ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.003 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.026 [INFO][4533] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" HandleID="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.027 [INFO][4533] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" HandleID="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3bc0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"csi-node-driver-b44f5", "timestamp":"2025-09-09 00:43:49.026811121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.027 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.027 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.027 [INFO][4533] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.036 [INFO][4533] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.042 [INFO][4533] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.046 [INFO][4533] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.047 [INFO][4533] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.052 [INFO][4533] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.052 [INFO][4533] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.053 [INFO][4533] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0 Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.056 [INFO][4533] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.063 [INFO][4533] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.063 [INFO][4533] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" host="localhost" Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.063 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:49.091213 env[1317]: 2025-09-09 00:43:49.063 [INFO][4533] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" HandleID="k8s-pod-network.d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:49.091812 env[1317]: 2025-09-09 00:43:49.066 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b44f5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fcbd175-b1d0-445a-87d8-30edc58c5294", 
ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b44f5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ada2e50bd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:49.091812 env[1317]: 2025-09-09 00:43:49.067 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:49.091812 env[1317]: 2025-09-09 00:43:49.067 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ada2e50bd0 ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:49.091812 env[1317]: 2025-09-09 00:43:49.073 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:49.091812 env[1317]: 2025-09-09 00:43:49.074 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b44f5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fcbd175-b1d0-445a-87d8-30edc58c5294", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0", Pod:"csi-node-driver-b44f5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ada2e50bd0", MAC:"42:be:a8:9f:3a:69", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:49.091812 env[1317]: 2025-09-09 00:43:49.087 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0" Namespace="calico-system" Pod="csi-node-driver-b44f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:43:49.104000 audit[4549]: NETFILTER_CFG table=filter:117 family=2 entries=48 op=nft_register_chain pid=4549 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:49.111210 kernel: kauditd_printk_skb: 20 callbacks suppressed Sep 9 00:43:49.111285 kernel: audit: type=1325 audit(1757378629.104:411): table=filter:117 family=2 entries=48 op=nft_register_chain pid=4549 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:49.111350 kernel: audit: type=1300 audit(1757378629.104:411): arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffd3d3b8d0 a2=0 a3=ffff9bc6afa8 items=0 ppid=3688 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:49.104000 audit[4549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffd3d3b8d0 a2=0 a3=ffff9bc6afa8 items=0 ppid=3688 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:49.104000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:49.114647 env[1317]: time="2025-09-09T00:43:49.114479783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:49.114647 env[1317]: time="2025-09-09T00:43:49.114539543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:49.114647 env[1317]: time="2025-09-09T00:43:49.114550263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:49.115182 env[1317]: time="2025-09-09T00:43:49.115119262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0 pid=4558 runtime=io.containerd.runc.v2 Sep 9 00:43:49.116290 kernel: audit: type=1327 audit(1757378629.104:411): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:49.148269 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:49.178507 env[1317]: time="2025-09-09T00:43:49.178467443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b44f5,Uid:5fcbd175-b1d0-445a-87d8-30edc58c5294,Namespace:calico-system,Attempt:1,} returns sandbox id \"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0\"" Sep 9 00:43:49.218092 systemd[1]: run-containerd-runc-k8s.io-3703a47693aaf22abeed9c22308e40c2abe8bf146fae79669e4acc0d348680e8-runc.TLSCOY.mount: Deactivated successfully. Sep 9 00:43:49.218231 systemd[1]: run-netns-cni\x2dcc053fd5\x2d76ef\x2d0a64\x2de7a4\x2d72b73d221bd4.mount: Deactivated successfully. 
Sep 9 00:43:49.780498 env[1317]: time="2025-09-09T00:43:49.780290257Z" level=info msg="StopPodSandbox for \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\"" Sep 9 00:43:49.780728 env[1317]: time="2025-09-09T00:43:49.780686257Z" level=info msg="StopPodSandbox for \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\"" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.885 [INFO][4609] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.885 [INFO][4609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" iface="eth0" netns="/var/run/netns/cni-96d776ac-3a9f-b632-3233-c771792360a0" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.885 [INFO][4609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" iface="eth0" netns="/var/run/netns/cni-96d776ac-3a9f-b632-3233-c771792360a0" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.885 [INFO][4609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" iface="eth0" netns="/var/run/netns/cni-96d776ac-3a9f-b632-3233-c771792360a0" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.885 [INFO][4609] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.885 [INFO][4609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.933 [INFO][4624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.933 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.934 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.944 [WARNING][4624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.944 [INFO][4624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.948 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:49.966876 env[1317]: 2025-09-09 00:43:49.958 [INFO][4609] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:43:49.966876 env[1317]: time="2025-09-09T00:43:49.963253690Z" level=info msg="TearDown network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\" successfully" Sep 9 00:43:49.966876 env[1317]: time="2025-09-09T00:43:49.963288090Z" level=info msg="StopPodSandbox for \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\" returns successfully" Sep 9 00:43:49.966160 systemd[1]: run-netns-cni\x2d96d776ac\x2d3a9f\x2db632\x2d3233\x2dc771792360a0.mount: Deactivated successfully. 
Sep 9 00:43:49.967434 kubelet[2118]: E0909 00:43:49.963521 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:49.970694 env[1317]: time="2025-09-09T00:43:49.969528400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zss5,Uid:2b1d432c-5704-4859-93d8-421968ff17c6,Namespace:kube-system,Attempt:1,}" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.907 [INFO][4610] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.907 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" iface="eth0" netns="/var/run/netns/cni-fb666177-f3a1-1a11-3812-20119056f524" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.909 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" iface="eth0" netns="/var/run/netns/cni-fb666177-f3a1-1a11-3812-20119056f524" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.909 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" iface="eth0" netns="/var/run/netns/cni-fb666177-f3a1-1a11-3812-20119056f524" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.909 [INFO][4610] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.909 [INFO][4610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.938 [INFO][4631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.940 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.948 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.957 [WARNING][4631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.957 [INFO][4631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.959 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:49.972266 env[1317]: 2025-09-09 00:43:49.962 [INFO][4610] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:43:49.974621 systemd[1]: run-netns-cni\x2dfb666177\x2df3a1\x2d1a11\x2d3812\x2d20119056f524.mount: Deactivated successfully. 
Sep 9 00:43:49.981032 env[1317]: time="2025-09-09T00:43:49.980350463Z" level=info msg="TearDown network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\" successfully" Sep 9 00:43:49.981032 env[1317]: time="2025-09-09T00:43:49.980392303Z" level=info msg="StopPodSandbox for \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\" returns successfully" Sep 9 00:43:49.981905 kubelet[2118]: E0909 00:43:49.980660 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:49.983062 env[1317]: time="2025-09-09T00:43:49.982950419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wh2kv,Uid:10d1e4ec-cd2b-4e64-bfe4-0460fd03c044,Namespace:kube-system,Attempt:1,}" Sep 9 00:43:50.181959 systemd-networkd[1097]: cali6b374681ad9: Link UP Sep 9 00:43:50.195746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:43:50.197158 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6b374681ad9: link becomes ready Sep 9 00:43:50.197153 systemd-networkd[1097]: cali6b374681ad9: Gained carrier Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.071 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0 coredns-7c65d6cfc9- kube-system 2b1d432c-5704-4859-93d8-421968ff17c6 1027 0 2025-09-09 00:43:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-2zss5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b374681ad9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.071 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.121 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" HandleID="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.121 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" HandleID="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b1570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-2zss5", "timestamp":"2025-09-09 00:43:50.121468926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.121 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.121 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.121 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.135 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.143 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.148 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.150 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.153 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.153 [INFO][4671] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.157 [INFO][4671] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.164 [INFO][4671] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.173 [INFO][4671] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" host="localhost" Sep 9 00:43:50.216282 
env[1317]: 2025-09-09 00:43:50.173 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" host="localhost" Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.173 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:43:50.216282 env[1317]: 2025-09-09 00:43:50.173 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" HandleID="k8s-pod-network.a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:50.217111 env[1317]: 2025-09-09 00:43:50.176 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b1d432c-5704-4859-93d8-421968ff17c6", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-7c65d6cfc9-2zss5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b374681ad9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:50.217111 env[1317]: 2025-09-09 00:43:50.176 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:50.217111 env[1317]: 2025-09-09 00:43:50.176 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b374681ad9 ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:50.217111 env[1317]: 2025-09-09 00:43:50.197 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:50.217111 env[1317]: 2025-09-09 00:43:50.198 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b1d432c-5704-4859-93d8-421968ff17c6", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a", Pod:"coredns-7c65d6cfc9-2zss5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b374681ad9", MAC:"46:76:c1:aa:19:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:50.217111 env[1317]: 2025-09-09 00:43:50.211 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zss5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:43:50.234000 audit[4698]: NETFILTER_CFG table=filter:118 family=2 entries=64 op=nft_register_chain pid=4698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:50.234000 audit[4698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30140 a0=3 a1=ffffcb665c70 a2=0 a3=ffffb401efa8 items=0 ppid=3688 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:50.240074 kernel: audit: type=1325 audit(1757378630.234:412): table=filter:118 family=2 entries=64 op=nft_register_chain pid=4698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:50.240125 kernel: audit: type=1300 audit(1757378630.234:412): arch=c00000b7 syscall=211 success=yes exit=30140 a0=3 a1=ffffcb665c70 a2=0 a3=ffffb401efa8 items=0 ppid=3688 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:50.234000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:50.242686 kernel: audit: type=1327 audit(1757378630.234:412): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:50.287266 env[1317]: 
time="2025-09-09T00:43:50.287194271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:50.287667 env[1317]: time="2025-09-09T00:43:50.287639191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:50.287891 env[1317]: time="2025-09-09T00:43:50.287821630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:50.288155 env[1317]: time="2025-09-09T00:43:50.288114830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a pid=4708 runtime=io.containerd.runc.v2 Sep 9 00:43:50.318413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1f730981b27: link becomes ready Sep 9 00:43:50.311836 systemd-networkd[1097]: cali1f730981b27: Link UP Sep 9 00:43:50.312052 systemd-networkd[1097]: cali1f730981b27: Gained carrier Sep 9 00:43:50.316783 systemd[1]: run-containerd-runc-k8s.io-a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a-runc.4EC4wA.mount: Deactivated successfully. 
Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.095 [INFO][4647] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0 coredns-7c65d6cfc9- kube-system 10d1e4ec-cd2b-4e64-bfe4-0460fd03c044 1028 0 2025-09-09 00:43:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wh2kv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f730981b27 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.095 [INFO][4647] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.156 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" HandleID="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.156 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" HandleID="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400004d5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-wh2kv", "timestamp":"2025-09-09 00:43:50.156706992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.157 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.173 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.173 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.240 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.271 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.275 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.280 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.282 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.282 [INFO][4680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.285 
[INFO][4680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854 Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.289 [INFO][4680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.298 [INFO][4680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.298 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" host="localhost" Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.298 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:43:50.322099 env[1317]: 2025-09-09 00:43:50.298 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" HandleID="k8s-pod-network.65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:50.322718 env[1317]: 2025-09-09 00:43:50.302 [INFO][4647] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wh2kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f730981b27", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:50.322718 env[1317]: 2025-09-09 00:43:50.302 [INFO][4647] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:50.322718 env[1317]: 2025-09-09 00:43:50.302 [INFO][4647] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f730981b27 ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:50.322718 env[1317]: 2025-09-09 00:43:50.310 [INFO][4647] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:50.322718 env[1317]: 2025-09-09 00:43:50.310 [INFO][4647] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854", Pod:"coredns-7c65d6cfc9-wh2kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f730981b27", MAC:"0a:4a:80:36:8b:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:43:50.322718 env[1317]: 2025-09-09 00:43:50.319 [INFO][4647] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wh2kv" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:43:50.345132 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:50.350000 audit[4745]: NETFILTER_CFG table=filter:119 family=2 entries=54 op=nft_register_chain pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:50.350000 audit[4745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25540 a0=3 a1=ffffce5f52a0 a2=0 a3=ffffaa077fa8 items=0 ppid=3688 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:50.357308 kernel: audit: type=1325 audit(1757378630.350:413): table=filter:119 family=2 entries=54 op=nft_register_chain pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 9 00:43:50.357379 kernel: audit: type=1300 audit(1757378630.350:413): arch=c00000b7 syscall=211 success=yes exit=25540 a0=3 a1=ffffce5f52a0 a2=0 a3=ffffaa077fa8 items=0 ppid=3688 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:50.357444 env[1317]: time="2025-09-09T00:43:50.357238844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:43:50.357444 env[1317]: time="2025-09-09T00:43:50.357279324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:43:50.357444 env[1317]: time="2025-09-09T00:43:50.357289044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:43:50.350000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:50.357875 env[1317]: time="2025-09-09T00:43:50.357817123Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854 pid=4751 runtime=io.containerd.runc.v2 Sep 9 00:43:50.359255 kernel: audit: type=1327 audit(1757378630.350:413): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 9 00:43:50.376117 env[1317]: time="2025-09-09T00:43:50.376070775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zss5,Uid:2b1d432c-5704-4859-93d8-421968ff17c6,Namespace:kube-system,Attempt:1,} returns sandbox id \"a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a\"" Sep 9 00:43:50.378476 kubelet[2118]: E0909 00:43:50.378442 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:50.380898 env[1317]: time="2025-09-09T00:43:50.380866968Z" level=info msg="CreateContainer within sandbox \"a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:43:50.390395 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:43:50.403715 env[1317]: time="2025-09-09T00:43:50.403257213Z" level=info msg="CreateContainer within sandbox \"a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"498a93ff377ddcd262d1fd6a1567951d910ba4c7453a9cbc8ba4bbc589f890f9\"" Sep 9 00:43:50.408249 env[1317]: time="2025-09-09T00:43:50.407744086Z" level=info msg="StartContainer for \"498a93ff377ddcd262d1fd6a1567951d910ba4c7453a9cbc8ba4bbc589f890f9\"" Sep 9 00:43:50.437699 env[1317]: time="2025-09-09T00:43:50.437601080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wh2kv,Uid:10d1e4ec-cd2b-4e64-bfe4-0460fd03c044,Namespace:kube-system,Attempt:1,} returns sandbox id \"65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854\"" Sep 9 00:43:50.445118 kubelet[2118]: E0909 00:43:50.439586 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:50.445225 env[1317]: time="2025-09-09T00:43:50.441941194Z" level=info msg="CreateContainer within sandbox \"65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:43:50.457350 env[1317]: time="2025-09-09T00:43:50.457307530Z" level=info msg="CreateContainer within sandbox \"65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7eabc6f36dc57d2df00bc30ebbba64a2a5edcc2a4276cffcdf6b4cc54066cd5f\"" Sep 9 00:43:50.460341 env[1317]: time="2025-09-09T00:43:50.460213046Z" level=info msg="StartContainer for \"7eabc6f36dc57d2df00bc30ebbba64a2a5edcc2a4276cffcdf6b4cc54066cd5f\"" Sep 9 00:43:50.506229 env[1317]: time="2025-09-09T00:43:50.506158655Z" level=info msg="StartContainer for \"498a93ff377ddcd262d1fd6a1567951d910ba4c7453a9cbc8ba4bbc589f890f9\" returns successfully" Sep 9 00:43:50.580455 env[1317]: time="2025-09-09T00:43:50.579642902Z" level=info msg="StartContainer for \"7eabc6f36dc57d2df00bc30ebbba64a2a5edcc2a4276cffcdf6b4cc54066cd5f\" returns successfully" Sep 9 00:43:50.608000 audit[4859]: NETFILTER_CFG 
table=filter:120 family=2 entries=17 op=nft_register_rule pid=4859 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:50.608000 audit[4859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc5497e70 a2=0 a3=1 items=0 ppid=2268 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:50.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:50.611998 kernel: audit: type=1325 audit(1757378630.608:414): table=filter:120 family=2 entries=17 op=nft_register_rule pid=4859 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:50.614000 audit[4859]: NETFILTER_CFG table=nat:121 family=2 entries=23 op=nft_register_chain pid=4859 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:50.614000 audit[4859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7812 a0=3 a1=ffffc5497e70 a2=0 a3=1 items=0 ppid=2268 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:50.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:50.932211 systemd-networkd[1097]: cali0ada2e50bd0: Gained IPv6LL Sep 9 00:43:50.951496 kubelet[2118]: E0909 00:43:50.951456 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:50.953664 kubelet[2118]: E0909 00:43:50.953626 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:50.956298 env[1317]: time="2025-09-09T00:43:50.956261164Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:50.960278 env[1317]: time="2025-09-09T00:43:50.960236598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:50.964810 env[1317]: time="2025-09-09T00:43:50.964775551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:50.967162 env[1317]: time="2025-09-09T00:43:50.967126188Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:50.967932 env[1317]: time="2025-09-09T00:43:50.967892346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 00:43:50.969584 env[1317]: time="2025-09-09T00:43:50.969547104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:43:50.978138 env[1317]: time="2025-09-09T00:43:50.978094251Z" level=info msg="CreateContainer within sandbox \"432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:43:50.990093 kubelet[2118]: I0909 00:43:50.990033 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wh2kv" 
podStartSLOduration=37.989968073 podStartE2EDuration="37.989968073s" podCreationTimestamp="2025-09-09 00:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:43:50.976399493 +0000 UTC m=+44.304386185" watchObservedRunningTime="2025-09-09 00:43:50.989968073 +0000 UTC m=+44.317954805" Sep 9 00:43:50.997882 env[1317]: time="2025-09-09T00:43:50.997831100Z" level=info msg="CreateContainer within sandbox \"432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8e189121ce2a587ef2f8c5adc0abdf581cd4a78c335df06023c7450172bfbe9a\"" Sep 9 00:43:50.998559 env[1317]: time="2025-09-09T00:43:50.998521939Z" level=info msg="StartContainer for \"8e189121ce2a587ef2f8c5adc0abdf581cd4a78c335df06023c7450172bfbe9a\"" Sep 9 00:43:51.004000 audit[4872]: NETFILTER_CFG table=filter:122 family=2 entries=16 op=nft_register_rule pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:51.004000 audit[4872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffceca0f60 a2=0 a3=1 items=0 ppid=2268 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:51.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:51.009570 kubelet[2118]: I0909 00:43:51.009512 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2zss5" podStartSLOduration=38.009497603 podStartE2EDuration="38.009497603s" podCreationTimestamp="2025-09-09 00:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-09 00:43:50.990649591 +0000 UTC m=+44.318636323" watchObservedRunningTime="2025-09-09 00:43:51.009497603 +0000 UTC m=+44.337484335" Sep 9 00:43:51.024000 audit[4872]: NETFILTER_CFG table=nat:123 family=2 entries=30 op=nft_register_chain pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:51.024000 audit[4872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9700 a0=3 a1=ffffceca0f60 a2=0 a3=1 items=0 ppid=2268 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:51.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:51.082386 env[1317]: time="2025-09-09T00:43:51.082333213Z" level=info msg="StartContainer for \"8e189121ce2a587ef2f8c5adc0abdf581cd4a78c335df06023c7450172bfbe9a\" returns successfully" Sep 9 00:43:51.154507 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:40440.service. Sep 9 00:43:51.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.119:22-10.0.0.1:40440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:43:51.206000 audit[4918]: USER_ACCT pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:51.207938 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 40440 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:43:51.208000 audit[4918]: CRED_ACQ pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:51.208000 audit[4918]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe884b8b0 a2=3 a3=1 items=0 ppid=1 pid=4918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:51.208000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:43:51.209901 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:43:51.214058 systemd-logind[1299]: New session 8 of user core. Sep 9 00:43:51.214660 systemd[1]: Started session-8.scope. 
Sep 9 00:43:51.230000 audit[4918]: USER_START pid=4918 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:51.232000 audit[4922]: CRED_ACQ pid=4922 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:51.678620 sshd[4918]: pam_unix(sshd:session): session closed for user core Sep 9 00:43:51.678000 audit[4918]: USER_END pid=4918 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:51.678000 audit[4918]: CRED_DISP pid=4918 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:51.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.119:22-10.0.0.1:40440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:51.681256 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:40440.service: Deactivated successfully. Sep 9 00:43:51.682767 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:43:51.683283 systemd-logind[1299]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:43:51.684001 systemd-logind[1299]: Removed session 8. 
Sep 9 00:43:51.892120 systemd-networkd[1097]: cali1f730981b27: Gained IPv6LL Sep 9 00:43:51.956925 kubelet[2118]: E0909 00:43:51.956833 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:51.960028 kubelet[2118]: E0909 00:43:51.959959 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:51.976837 kubelet[2118]: I0909 00:43:51.976764 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-89f6f49cb-svnf4" podStartSLOduration=23.136616894 podStartE2EDuration="26.97672299s" podCreationTimestamp="2025-09-09 00:43:25 +0000 UTC" firstStartedPulling="2025-09-09 00:43:47.128656529 +0000 UTC m=+40.456643261" lastFinishedPulling="2025-09-09 00:43:50.968762625 +0000 UTC m=+44.296749357" observedRunningTime="2025-09-09 00:43:51.976257631 +0000 UTC m=+45.304244363" watchObservedRunningTime="2025-09-09 00:43:51.97672299 +0000 UTC m=+45.304709722" Sep 9 00:43:52.006139 systemd[1]: run-containerd-runc-k8s.io-8e189121ce2a587ef2f8c5adc0abdf581cd4a78c335df06023c7450172bfbe9a-runc.xMkETM.mount: Deactivated successfully. 
Sep 9 00:43:52.021172 systemd-networkd[1097]: cali6b374681ad9: Gained IPv6LL Sep 9 00:43:52.059000 audit[4958]: NETFILTER_CFG table=filter:124 family=2 entries=13 op=nft_register_rule pid=4958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:52.059000 audit[4958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffd0ac450 a2=0 a3=1 items=0 ppid=2268 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:52.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:52.067000 audit[4958]: NETFILTER_CFG table=nat:125 family=2 entries=51 op=nft_register_chain pid=4958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:43:52.067000 audit[4958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21396 a0=3 a1=fffffd0ac450 a2=0 a3=1 items=0 ppid=2268 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:52.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:43:52.072727 env[1317]: time="2025-09-09T00:43:52.072687168Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:52.074558 env[1317]: time="2025-09-09T00:43:52.074521206Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:52.077284 env[1317]: 
time="2025-09-09T00:43:52.077246482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:52.078767 env[1317]: time="2025-09-09T00:43:52.078735399Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:52.079444 env[1317]: time="2025-09-09T00:43:52.079411398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 00:43:52.082958 env[1317]: time="2025-09-09T00:43:52.082926313Z" level=info msg="CreateContainer within sandbox \"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:43:52.097593 env[1317]: time="2025-09-09T00:43:52.097550012Z" level=info msg="CreateContainer within sandbox \"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2c5f4e5620e6a7f4572c8cdee4e727e25ef4ef2bf12793498442b71e3b22664f\"" Sep 9 00:43:52.098098 env[1317]: time="2025-09-09T00:43:52.098070811Z" level=info msg="StartContainer for \"2c5f4e5620e6a7f4572c8cdee4e727e25ef4ef2bf12793498442b71e3b22664f\"" Sep 9 00:43:52.162002 env[1317]: time="2025-09-09T00:43:52.161948397Z" level=info msg="StartContainer for \"2c5f4e5620e6a7f4572c8cdee4e727e25ef4ef2bf12793498442b71e3b22664f\" returns successfully" Sep 9 00:43:52.163240 env[1317]: time="2025-09-09T00:43:52.163209275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:43:52.960440 kubelet[2118]: E0909 00:43:52.960400 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:52.960864 kubelet[2118]: E0909 00:43:52.960807 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:43:53.191129 env[1317]: time="2025-09-09T00:43:53.191066449Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:53.193056 env[1317]: time="2025-09-09T00:43:53.193017406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:53.194955 env[1317]: time="2025-09-09T00:43:53.194916884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:53.196060 env[1317]: time="2025-09-09T00:43:53.196033922Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:43:53.196493 env[1317]: time="2025-09-09T00:43:53.196468561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 00:43:53.199347 env[1317]: time="2025-09-09T00:43:53.199250837Z" level=info msg="CreateContainer within sandbox \"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:43:53.216744 env[1317]: 
time="2025-09-09T00:43:53.216650372Z" level=info msg="CreateContainer within sandbox \"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"30c73432926016acf5218d3c4edd9cd67db63fba9549d707374f58339f950b14\"" Sep 9 00:43:53.217333 env[1317]: time="2025-09-09T00:43:53.217306291Z" level=info msg="StartContainer for \"30c73432926016acf5218d3c4edd9cd67db63fba9549d707374f58339f950b14\"" Sep 9 00:43:53.251575 systemd[1]: run-containerd-runc-k8s.io-30c73432926016acf5218d3c4edd9cd67db63fba9549d707374f58339f950b14-runc.X2GFLW.mount: Deactivated successfully. Sep 9 00:43:53.294519 env[1317]: time="2025-09-09T00:43:53.294475300Z" level=info msg="StartContainer for \"30c73432926016acf5218d3c4edd9cd67db63fba9549d707374f58339f950b14\" returns successfully" Sep 9 00:43:53.864351 kubelet[2118]: I0909 00:43:53.864278 2118 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:43:53.866595 kubelet[2118]: I0909 00:43:53.866562 2118 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:43:56.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.119:22-10.0.0.1:40448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:56.681886 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:40448.service. Sep 9 00:43:56.684908 kernel: kauditd_printk_skb: 28 callbacks suppressed Sep 9 00:43:56.685005 kernel: audit: type=1130 audit(1757378636.681:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.119:22-10.0.0.1:40448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:43:56.741000 audit[5042]: USER_ACCT pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.742604 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 40448 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:43:56.744433 sshd[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:43:56.743000 audit[5042]: CRED_ACQ pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.748086 kernel: audit: type=1101 audit(1757378636.741:430): pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.748148 kernel: audit: type=1103 audit(1757378636.743:431): pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.748174 kernel: audit: type=1006 audit(1757378636.743:432): pid=5042 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Sep 9 00:43:56.749488 kernel: audit: type=1300 audit(1757378636.743:432): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8b8cd70 a2=3 a3=1 items=0 ppid=1 pid=5042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 
00:43:56.743000 audit[5042]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8b8cd70 a2=3 a3=1 items=0 ppid=1 pid=5042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:43:56.749322 systemd[1]: Started session-9.scope. Sep 9 00:43:56.749905 systemd-logind[1299]: New session 9 of user core. Sep 9 00:43:56.751879 kernel: audit: type=1327 audit(1757378636.743:432): proctitle=737368643A20636F7265205B707269765D Sep 9 00:43:56.743000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:43:56.753000 audit[5042]: USER_START pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.755000 audit[5045]: CRED_ACQ pid=5045 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.760663 kernel: audit: type=1105 audit(1757378636.753:433): pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.760752 kernel: audit: type=1103 audit(1757378636.755:434): pid=5045 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.990552 sshd[5042]: pam_unix(sshd:session): session closed for user core Sep 9 00:43:56.991000 audit[5042]: USER_END 
pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.994576 systemd-logind[1299]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:43:56.995074 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:40448.service: Deactivated successfully. Sep 9 00:43:56.995844 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:43:56.991000 audit[5042]: CRED_DISP pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.999173 kernel: audit: type=1106 audit(1757378636.991:435): pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.999252 kernel: audit: type=1104 audit(1757378636.991:436): pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:43:56.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.119:22-10.0.0.1:40448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:43:56.999909 systemd-logind[1299]: Removed session 9. Sep 9 00:44:01.992320 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:33460.service. 
Sep 9 00:44:01.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.119:22-10.0.0.1:33460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:01.993483 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 9 00:44:01.993564 kernel: audit: type=1130 audit(1757378641.991:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.119:22-10.0.0.1:33460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:02.035000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.036319 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 33460 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:02.038451 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:02.037000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.041242 kernel: audit: type=1101 audit(1757378642.035:439): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.041662 kernel: audit: type=1103 audit(1757378642.037:440): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.041707 kernel: audit: type=1006 audit(1757378642.037:441): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 9 00:44:02.037000 audit[5059]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd8f0100 a2=3 a3=1 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:02.044663 systemd-logind[1299]: New session 10 of user core. Sep 9 00:44:02.046042 kernel: audit: type=1300 audit(1757378642.037:441): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd8f0100 a2=3 a3=1 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:02.046125 kernel: audit: type=1327 audit(1757378642.037:441): proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:02.037000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:02.045835 systemd[1]: Started session-10.scope. 
Sep 9 00:44:02.050000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.052000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.058674 kernel: audit: type=1105 audit(1757378642.050:442): pid=5059 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.058775 kernel: audit: type=1103 audit(1757378642.052:443): pid=5062 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.221338 sshd[5059]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:02.222000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.223794 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:33468.service. Sep 9 00:44:02.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.119:22-10.0.0.1:33468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:44:02.228743 kernel: audit: type=1106 audit(1757378642.222:444): pid=5059 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.228822 kernel: audit: type=1130 audit(1757378642.225:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.119:22-10.0.0.1:33468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:02.225000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.119:22-10.0.0.1:33460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:02.231310 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:33460.service: Deactivated successfully. Sep 9 00:44:02.233059 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:44:02.233591 systemd-logind[1299]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:44:02.234893 systemd-logind[1299]: Removed session 10. 
Sep 9 00:44:02.268000 audit[5073]: USER_ACCT pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.270086 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 33468 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:02.269000 audit[5073]: CRED_ACQ pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.270000 audit[5073]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee314e00 a2=3 a3=1 items=0 ppid=1 pid=5073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:02.270000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:02.271337 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:02.275604 systemd[1]: Started session-11.scope. Sep 9 00:44:02.276131 systemd-logind[1299]: New session 11 of user core. 
Sep 9 00:44:02.280000 audit[5073]: USER_START pid=5073 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.282000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.470597 sshd[5073]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:02.473000 audit[5073]: USER_END pid=5073 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.473000 audit[5073]: CRED_DISP pid=5073 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.119:22-10.0.0.1:33482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:02.474689 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:33482.service. Sep 9 00:44:02.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.119:22-10.0.0.1:33468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:02.482300 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:33468.service: Deactivated successfully. 
Sep 9 00:44:02.485170 systemd-logind[1299]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:44:02.485218 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:44:02.492574 systemd-logind[1299]: Removed session 11. Sep 9 00:44:02.527000 audit[5085]: USER_ACCT pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.528423 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 33482 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:02.528000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.528000 audit[5085]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc746ca0 a2=3 a3=1 items=0 ppid=1 pid=5085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:02.528000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:02.530200 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:02.533906 systemd-logind[1299]: New session 12 of user core. Sep 9 00:44:02.534750 systemd[1]: Started session-12.scope. 
Sep 9 00:44:02.536000 audit[5085]: USER_START pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.538000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.672458 sshd[5085]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:02.672000 audit[5085]: USER_END pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.672000 audit[5085]: CRED_DISP pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:02.674877 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:33482.service: Deactivated successfully. Sep 9 00:44:02.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.119:22-10.0.0.1:33482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:02.675903 systemd-logind[1299]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:44:02.675968 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:44:02.676737 systemd-logind[1299]: Removed session 12. 
Sep 9 00:44:03.979574 systemd[1]: run-containerd-runc-k8s.io-5b4f4b13f0d5378593f8098125dac60fa47de81c772837d7a3f867dd26962037-runc.82w1kb.mount: Deactivated successfully. Sep 9 00:44:04.078595 kubelet[2118]: I0909 00:44:04.077112 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b44f5" podStartSLOduration=35.059408315 podStartE2EDuration="39.077093835s" podCreationTimestamp="2025-09-09 00:43:25 +0000 UTC" firstStartedPulling="2025-09-09 00:43:49.18001436 +0000 UTC m=+42.508001092" lastFinishedPulling="2025-09-09 00:43:53.19769988 +0000 UTC m=+46.525686612" observedRunningTime="2025-09-09 00:43:53.994573731 +0000 UTC m=+47.322560463" watchObservedRunningTime="2025-09-09 00:44:04.077093835 +0000 UTC m=+57.405080567" Sep 9 00:44:04.098000 audit[5129]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=5129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:44:04.098000 audit[5129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd6833f90 a2=0 a3=1 items=0 ppid=2268 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:04.098000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:44:04.108000 audit[5129]: NETFILTER_CFG table=nat:127 family=2 entries=31 op=nft_register_chain pid=5129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:44:04.108000 audit[5129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffd6833f90 a2=0 a3=1 items=0 ppid=2268 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 
00:44:04.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:44:06.764496 env[1317]: time="2025-09-09T00:44:06.764454755Z" level=info msg="StopPodSandbox for \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\"" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.811 [WARNING][5145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--v64bm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"767b79c6-02cc-4919-ae65-36b5295c2cf4", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97", Pod:"goldmane-7988f88666-v64bm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d018efbab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.811 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.811 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" iface="eth0" netns="" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.811 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.811 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.857 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.858 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.858 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.866 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.867 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.868 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:06.877281 env[1317]: 2025-09-09 00:44:06.870 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.880116 env[1317]: time="2025-09-09T00:44:06.880075537Z" level=info msg="TearDown network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\" successfully" Sep 9 00:44:06.880226 env[1317]: time="2025-09-09T00:44:06.880207857Z" level=info msg="StopPodSandbox for \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\" returns successfully" Sep 9 00:44:06.880901 env[1317]: time="2025-09-09T00:44:06.880867576Z" level=info msg="RemovePodSandbox for \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\"" Sep 9 00:44:06.880971 env[1317]: time="2025-09-09T00:44:06.880913016Z" level=info msg="Forcibly stopping sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\"" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.927 [WARNING][5174] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--v64bm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"767b79c6-02cc-4919-ae65-36b5295c2cf4", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2838aecfa34d127deeaf452382b466a02223282dc0c6aaf5926861d23f35e97", Pod:"goldmane-7988f88666-v64bm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d018efbab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.927 [INFO][5174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.927 [INFO][5174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" iface="eth0" netns="" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.927 [INFO][5174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.927 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.953 [INFO][5183] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.953 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.953 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.962 [WARNING][5183] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.962 [INFO][5183] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" HandleID="k8s-pod-network.36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Workload="localhost-k8s-goldmane--7988f88666--v64bm-eth0" Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.964 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:44:06.975884 env[1317]: 2025-09-09 00:44:06.972 [INFO][5174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2" Sep 9 00:44:06.976739 env[1317]: time="2025-09-09T00:44:06.975911703Z" level=info msg="TearDown network for sandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\" successfully" Sep 9 00:44:06.990121 env[1317]: time="2025-09-09T00:44:06.990068166Z" level=info msg="RemovePodSandbox \"36f1c5a3f6f3311823396f3e21050dbc4b15a90429e3fcd9c0b69c8b0572add2\" returns successfully" Sep 9 00:44:06.990694 env[1317]: time="2025-09-09T00:44:06.990663125Z" level=info msg="StopPodSandbox for \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\"" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.036 [WARNING][5203] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4", Pod:"calico-apiserver-55cdd6bdb6-td7gz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9ff5506ddb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.036 [INFO][5203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.036 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" iface="eth0" netns="" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.036 [INFO][5203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.037 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.056 [INFO][5212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.056 [INFO][5212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.056 [INFO][5212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.067 [WARNING][5212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.067 [INFO][5212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.069 [INFO][5212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.073375 env[1317]: 2025-09-09 00:44:07.071 [INFO][5203] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.073910 env[1317]: time="2025-09-09T00:44:07.073877187Z" level=info msg="TearDown network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\" successfully" Sep 9 00:44:07.074101 env[1317]: time="2025-09-09T00:44:07.074034987Z" level=info msg="StopPodSandbox for \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\" returns successfully" Sep 9 00:44:07.075427 env[1317]: time="2025-09-09T00:44:07.075396185Z" level=info msg="RemovePodSandbox for \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\"" Sep 9 00:44:07.075600 env[1317]: time="2025-09-09T00:44:07.075546945Z" level=info msg="Forcibly stopping sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\"" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.114 [WARNING][5231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec54d5e2-70bd-4445-9ea0-62cda1c0ae32", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72c460ea5e8df0e754eb56fd4c7850eb8ec7afd28f1343fa5d9bc376f510e1f4", Pod:"calico-apiserver-55cdd6bdb6-td7gz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9ff5506ddb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.114 [INFO][5231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.114 [INFO][5231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" iface="eth0" netns="" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.114 [INFO][5231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.114 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.132 [INFO][5239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.132 [INFO][5239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.132 [INFO][5239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.141 [WARNING][5239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.141 [INFO][5239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" HandleID="k8s-pod-network.42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--td7gz-eth0" Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.142 [INFO][5239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.145862 env[1317]: 2025-09-09 00:44:07.144 [INFO][5231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a" Sep 9 00:44:07.146497 env[1317]: time="2025-09-09T00:44:07.146449101Z" level=info msg="TearDown network for sandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\" successfully" Sep 9 00:44:07.149716 env[1317]: time="2025-09-09T00:44:07.149677657Z" level=info msg="RemovePodSandbox \"42560377b0c187d6e606a5820f8be0a900eba00f7499c74159b4ab9bae180f5a\" returns successfully" Sep 9 00:44:07.150376 env[1317]: time="2025-09-09T00:44:07.150351537Z" level=info msg="StopPodSandbox for \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\"" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.183 [WARNING][5257] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854", Pod:"coredns-7c65d6cfc9-wh2kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f730981b27", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.183 [INFO][5257] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.183 [INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" iface="eth0" netns="" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.183 [INFO][5257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.183 [INFO][5257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.202 [INFO][5266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.202 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.202 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.212 [WARNING][5266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.212 [INFO][5266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.214 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.217582 env[1317]: 2025-09-09 00:44:07.216 [INFO][5257] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.218172 env[1317]: time="2025-09-09T00:44:07.218136977Z" level=info msg="TearDown network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\" successfully" Sep 9 00:44:07.218238 env[1317]: time="2025-09-09T00:44:07.218223697Z" level=info msg="StopPodSandbox for \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\" returns successfully" Sep 9 00:44:07.218719 env[1317]: time="2025-09-09T00:44:07.218697736Z" level=info msg="RemovePodSandbox for \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\"" Sep 9 00:44:07.218881 env[1317]: time="2025-09-09T00:44:07.218843856Z" level=info msg="Forcibly stopping sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\"" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.254 [WARNING][5283] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10d1e4ec-cd2b-4e64-bfe4-0460fd03c044", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65a991b2dfe7573a8b186c96fba165d76e9dd5b0a6bb2251e2657d7a8e1b6854", Pod:"coredns-7c65d6cfc9-wh2kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f730981b27", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.254 [INFO][5283] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.254 [INFO][5283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" iface="eth0" netns="" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.254 [INFO][5283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.254 [INFO][5283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.293 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.293 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.293 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.303 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.303 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" HandleID="k8s-pod-network.70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Workload="localhost-k8s-coredns--7c65d6cfc9--wh2kv-eth0" Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.304 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.308662 env[1317]: 2025-09-09 00:44:07.307 [INFO][5283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775" Sep 9 00:44:07.309397 env[1317]: time="2025-09-09T00:44:07.309308949Z" level=info msg="TearDown network for sandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\" successfully" Sep 9 00:44:07.313175 env[1317]: time="2025-09-09T00:44:07.313139345Z" level=info msg="RemovePodSandbox \"70852b9bb277dd459b6d557d5e7a0b6299e14d6aef9834827eb09dc2ee0ca775\" returns successfully" Sep 9 00:44:07.314341 env[1317]: time="2025-09-09T00:44:07.314307823Z" level=info msg="StopPodSandbox for \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\"" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.346 [WARNING][5309] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b1d432c-5704-4859-93d8-421968ff17c6", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a", Pod:"coredns-7c65d6cfc9-2zss5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b374681ad9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.347 [INFO][5309] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.347 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" iface="eth0" netns="" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.347 [INFO][5309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.347 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.365 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.366 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.366 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.374 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.374 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.375 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.379261 env[1317]: 2025-09-09 00:44:07.377 [INFO][5309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.379782 env[1317]: time="2025-09-09T00:44:07.379747626Z" level=info msg="TearDown network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\" successfully" Sep 9 00:44:07.379860 env[1317]: time="2025-09-09T00:44:07.379844586Z" level=info msg="StopPodSandbox for \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\" returns successfully" Sep 9 00:44:07.381367 env[1317]: time="2025-09-09T00:44:07.381333184Z" level=info msg="RemovePodSandbox for \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\"" Sep 9 00:44:07.381547 env[1317]: time="2025-09-09T00:44:07.381505584Z" level=info msg="Forcibly stopping sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\"" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.416 [WARNING][5336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b1d432c-5704-4859-93d8-421968ff17c6", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3080863937314653adb8a172187105f83dd406edf277f377996d79abc3e213a", Pod:"coredns-7c65d6cfc9-2zss5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b374681ad9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.416 [INFO][5336] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.416 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" iface="eth0" netns="" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.416 [INFO][5336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.416 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.433 [INFO][5344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.433 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.433 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.442 [WARNING][5344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.442 [INFO][5344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" HandleID="k8s-pod-network.14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Workload="localhost-k8s-coredns--7c65d6cfc9--2zss5-eth0" Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.444 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.447336 env[1317]: 2025-09-09 00:44:07.445 [INFO][5336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e" Sep 9 00:44:07.447904 env[1317]: time="2025-09-09T00:44:07.447867906Z" level=info msg="TearDown network for sandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\" successfully" Sep 9 00:44:07.451235 env[1317]: time="2025-09-09T00:44:07.451194062Z" level=info msg="RemovePodSandbox \"14f8c6ae9fd89e20b7a32adfe5e6e400ed8cc7dc6b47a69b40f54cceb171f62e\" returns successfully" Sep 9 00:44:07.451900 env[1317]: time="2025-09-09T00:44:07.451872421Z" level=info msg="StopPodSandbox for \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\"" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.486 [WARNING][5362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b44f5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fcbd175-b1d0-445a-87d8-30edc58c5294", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0", Pod:"csi-node-driver-b44f5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ada2e50bd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.486 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.486 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" iface="eth0" netns="" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.486 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.486 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.508 [INFO][5371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.516 [INFO][5371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.516 [INFO][5371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.525 [WARNING][5371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.525 [INFO][5371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.526 [INFO][5371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:44:07.530406 env[1317]: 2025-09-09 00:44:07.528 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.530959 env[1317]: time="2025-09-09T00:44:07.530902408Z" level=info msg="TearDown network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\" successfully" Sep 9 00:44:07.531072 env[1317]: time="2025-09-09T00:44:07.531053607Z" level=info msg="StopPodSandbox for \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\" returns successfully" Sep 9 00:44:07.531608 env[1317]: time="2025-09-09T00:44:07.531580047Z" level=info msg="RemovePodSandbox for \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\"" Sep 9 00:44:07.531669 env[1317]: time="2025-09-09T00:44:07.531619087Z" level=info msg="Forcibly stopping sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\"" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.615 [WARNING][5388] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b44f5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fcbd175-b1d0-445a-87d8-30edc58c5294", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1c6814a628205bbb6a01b31ff0c00e5d0558628c035193a9a40fb7a71e5cfc0", Pod:"csi-node-driver-b44f5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ada2e50bd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.616 [INFO][5388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.616 [INFO][5388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" iface="eth0" netns="" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.616 [INFO][5388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.616 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.640 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.640 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.640 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.651 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.652 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" HandleID="k8s-pod-network.9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Workload="localhost-k8s-csi--node--driver--b44f5-eth0" Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.654 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:44:07.657286 env[1317]: 2025-09-09 00:44:07.655 [INFO][5388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5" Sep 9 00:44:07.657286 env[1317]: time="2025-09-09T00:44:07.657253258Z" level=info msg="TearDown network for sandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\" successfully" Sep 9 00:44:07.660261 env[1317]: time="2025-09-09T00:44:07.660226975Z" level=info msg="RemovePodSandbox \"9cafdac646227d0a182a35f7051951c4bc513576eda04ef7afd1a8a646d86ab5\" returns successfully" Sep 9 00:44:07.660726 env[1317]: time="2025-09-09T00:44:07.660696614Z" level=info msg="StopPodSandbox for \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\"" Sep 9 00:44:07.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.119:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:07.675888 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:33486.service. Sep 9 00:44:07.679880 kernel: kauditd_printk_skb: 29 callbacks suppressed Sep 9 00:44:07.680026 kernel: audit: type=1130 audit(1757378647.675:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.119:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:44:07.732000 audit[5421]: USER_ACCT pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.733000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.734813 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:07.736584 sshd[5421]: Accepted publickey for core from 10.0.0.1 port 33486 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:07.738390 kernel: audit: type=1101 audit(1757378647.732:468): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.738447 kernel: audit: type=1103 audit(1757378647.733:469): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.738476 kernel: audit: type=1006 audit(1757378647.733:470): pid=5421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 9 00:44:07.739804 kernel: audit: type=1300 audit(1757378647.733:470): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0815ac0 a2=3 a3=1 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 
00:44:07.733000 audit[5421]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0815ac0 a2=3 a3=1 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:07.743166 kernel: audit: type=1327 audit(1757378647.733:470): proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:07.733000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:07.744331 systemd[1]: Started session-13.scope. Sep 9 00:44:07.744808 systemd-logind[1299]: New session 13 of user core. Sep 9 00:44:07.749000 audit[5421]: USER_START pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.753000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.758291 kernel: audit: type=1105 audit(1757378647.749:471): pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.758377 kernel: audit: type=1103 audit(1757378647.753:472): pid=5433 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.705 [WARNING][5414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0", GenerateName:"calico-kube-controllers-89f6f49cb-", Namespace:"calico-system", SelfLink:"", UID:"506277be-dd46-4716-b8b9-1f3976363568", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89f6f49cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a", Pod:"calico-kube-controllers-89f6f49cb-svnf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b86d719201", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.706 [INFO][5414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.706 [INFO][5414] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" iface="eth0" netns="" Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.706 [INFO][5414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.706 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.725 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.725 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.726 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.740 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.740 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.746 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.760406 env[1317]: 2025-09-09 00:44:07.754 [INFO][5414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.760785 env[1317]: time="2025-09-09T00:44:07.760440817Z" level=info msg="TearDown network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\" successfully" Sep 9 00:44:07.760785 env[1317]: time="2025-09-09T00:44:07.760469377Z" level=info msg="StopPodSandbox for \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\" returns successfully" Sep 9 00:44:07.760983 env[1317]: time="2025-09-09T00:44:07.760947856Z" level=info msg="RemovePodSandbox for \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\"" Sep 9 00:44:07.761050 env[1317]: time="2025-09-09T00:44:07.761001896Z" level=info msg="Forcibly stopping sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\"" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.795 [WARNING][5446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0", GenerateName:"calico-kube-controllers-89f6f49cb-", Namespace:"calico-system", SelfLink:"", UID:"506277be-dd46-4716-b8b9-1f3976363568", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"89f6f49cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432b8beddf8e3f5a9d0667d77794fc33245b41267d81bbca8e1f8b5da0b4fe8a", Pod:"calico-kube-controllers-89f6f49cb-svnf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b86d719201", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.795 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.795 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" iface="eth0" netns="" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.795 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.795 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.839 [INFO][5457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.839 [INFO][5457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.839 [INFO][5457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.847 [WARNING][5457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.847 [INFO][5457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" HandleID="k8s-pod-network.64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Workload="localhost-k8s-calico--kube--controllers--89f6f49cb--svnf4-eth0" Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.849 [INFO][5457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.857682 env[1317]: 2025-09-09 00:44:07.851 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8" Sep 9 00:44:07.858460 env[1317]: time="2025-09-09T00:44:07.857709742Z" level=info msg="TearDown network for sandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\" successfully" Sep 9 00:44:07.860598 env[1317]: time="2025-09-09T00:44:07.860561419Z" level=info msg="RemovePodSandbox \"64a9ecdf74f21c640cde72676f4543b3b0335d22ef196b2f27fb3b0a9c6064b8\" returns successfully" Sep 9 00:44:07.861022 env[1317]: time="2025-09-09T00:44:07.860989538Z" level=info msg="StopPodSandbox for \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\"" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.894 [WARNING][5481] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" WorkloadEndpoint="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.894 [INFO][5481] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.894 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" iface="eth0" netns="" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.894 [INFO][5481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.894 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.926 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.926 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.927 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.935 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.935 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.937 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:07.948240 env[1317]: 2025-09-09 00:44:07.946 [INFO][5481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:07.948240 env[1317]: time="2025-09-09T00:44:07.948206235Z" level=info msg="TearDown network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\" successfully" Sep 9 00:44:07.948240 env[1317]: time="2025-09-09T00:44:07.948240915Z" level=info msg="StopPodSandbox for \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\" returns successfully" Sep 9 00:44:07.950088 env[1317]: time="2025-09-09T00:44:07.950052873Z" level=info msg="RemovePodSandbox for \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\"" Sep 9 00:44:07.950175 env[1317]: time="2025-09-09T00:44:07.950098673Z" level=info msg="Forcibly stopping sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\"" Sep 9 00:44:08.034990 sshd[5421]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:08.035000 audit[5421]: USER_END pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:08.037398 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:33486.service: Deactivated successfully. Sep 9 00:44:08.038225 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:44:08.039267 systemd-logind[1299]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:44:08.035000 audit[5421]: CRED_DISP pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:08.040130 systemd-logind[1299]: Removed session 13. Sep 9 00:44:08.042185 kernel: audit: type=1106 audit(1757378648.035:473): pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:08.042249 kernel: audit: type=1104 audit(1757378648.035:474): pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:08.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.119:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:07.990 [WARNING][5511] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" WorkloadEndpoint="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:07.990 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:07.990 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" iface="eth0" netns="" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:07.990 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:07.990 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:08.024 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:08.024 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:08.024 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:08.050 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:08.050 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" HandleID="k8s-pod-network.8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Workload="localhost-k8s-whisker--7bbf7966b7--gp29k-eth0" Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:08.051 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:08.055332 env[1317]: 2025-09-09 00:44:08.053 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044" Sep 9 00:44:08.055667 env[1317]: time="2025-09-09T00:44:08.055368429Z" level=info msg="TearDown network for sandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\" successfully" Sep 9 00:44:08.058366 env[1317]: time="2025-09-09T00:44:08.058335346Z" level=info msg="RemovePodSandbox \"8728e9835cecfb4195584b12f0d91993e6287ccbb4ac7cc5d52b2d51a4c4f044\" returns successfully" Sep 9 00:44:08.058867 env[1317]: time="2025-09-09T00:44:08.058840865Z" level=info msg="StopPodSandbox for \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\"" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.093 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"53253ab8-84f6-4a5e-8e9a-c2b463038540", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc", Pod:"calico-apiserver-55cdd6bdb6-9k5zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali128d51d5ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.094 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.094 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" iface="eth0" netns="" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.094 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.094 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.111 [INFO][5550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.112 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.112 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.163 [WARNING][5550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.163 [INFO][5550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.168 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:08.171798 env[1317]: 2025-09-09 00:44:08.170 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.172344 env[1317]: time="2025-09-09T00:44:08.172300893Z" level=info msg="TearDown network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\" successfully" Sep 9 00:44:08.172414 env[1317]: time="2025-09-09T00:44:08.172400053Z" level=info msg="StopPodSandbox for \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\" returns successfully" Sep 9 00:44:08.173014 env[1317]: time="2025-09-09T00:44:08.172973492Z" level=info msg="RemovePodSandbox for \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\"" Sep 9 00:44:08.173092 env[1317]: time="2025-09-09T00:44:08.173018732Z" level=info msg="Forcibly stopping sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\"" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.205 [WARNING][5568] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0", GenerateName:"calico-apiserver-55cdd6bdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"53253ab8-84f6-4a5e-8e9a-c2b463038540", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cdd6bdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee6f08ddab18d58f206da44a8bf78b28a668c5d8bbaa716b96ad82914dbeb6fc", Pod:"calico-apiserver-55cdd6bdb6-9k5zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali128d51d5ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.206 [INFO][5568] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.206 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" iface="eth0" netns="" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.206 [INFO][5568] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.206 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.223 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.223 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.223 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.235 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.235 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" HandleID="k8s-pod-network.3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Workload="localhost-k8s-calico--apiserver--55cdd6bdb6--9k5zf-eth0" Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.237 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:44:08.244426 env[1317]: 2025-09-09 00:44:08.241 [INFO][5568] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac" Sep 9 00:44:08.245062 env[1317]: time="2025-09-09T00:44:08.244894728Z" level=info msg="TearDown network for sandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\" successfully" Sep 9 00:44:08.248143 env[1317]: time="2025-09-09T00:44:08.248115204Z" level=info msg="RemovePodSandbox \"3419105345b4c7ef2cbd48fd7a394ea6cb03bf7831ca51b0425b77d5b8978fac\" returns successfully" Sep 9 00:44:10.206811 systemd[1]: run-containerd-runc-k8s.io-8e189121ce2a587ef2f8c5adc0abdf581cd4a78c335df06023c7450172bfbe9a-runc.tPMDFX.mount: Deactivated successfully. Sep 9 00:44:13.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.119:22-10.0.0.1:35360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:13.037672 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:35360.service. 
Sep 9 00:44:13.038695 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 9 00:44:13.038738 kernel: audit: type=1130 audit(1757378653.036:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.119:22-10.0.0.1:35360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:13.075000 audit[5626]: USER_ACCT pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.076489 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 35360 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:13.078004 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:13.076000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.081798 kernel: audit: type=1101 audit(1757378653.075:477): pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.081853 kernel: audit: type=1103 audit(1757378653.076:478): pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.081883 kernel: audit: type=1006 audit(1757378653.076:479): pid=5626 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 
res=1 Sep 9 00:44:13.081953 systemd-logind[1299]: New session 14 of user core. Sep 9 00:44:13.082270 systemd[1]: Started session-14.scope. Sep 9 00:44:13.082893 kernel: audit: type=1300 audit(1757378653.076:479): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4682ee0 a2=3 a3=1 items=0 ppid=1 pid=5626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:13.076000 audit[5626]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4682ee0 a2=3 a3=1 items=0 ppid=1 pid=5626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:13.086109 kernel: audit: type=1327 audit(1757378653.076:479): proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:13.076000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:13.088000 audit[5626]: USER_START pid=5626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.089000 audit[5629]: CRED_ACQ pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.095047 kernel: audit: type=1105 audit(1757378653.088:480): pid=5626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.095111 kernel: audit: type=1103 
audit(1757378653.089:481): pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.273865 sshd[5626]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:13.273000 audit[5626]: USER_END pid=5626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.276143 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:35360.service: Deactivated successfully. Sep 9 00:44:13.277015 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:44:13.273000 audit[5626]: CRED_DISP pid=5626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.281328 kernel: audit: type=1106 audit(1757378653.273:482): pid=5626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.281383 kernel: audit: type=1104 audit(1757378653.273:483): pid=5626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:13.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.119:22-10.0.0.1:35360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:44:13.281457 systemd-logind[1299]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:44:13.282429 systemd-logind[1299]: Removed session 14. Sep 9 00:44:13.533344 systemd[1]: run-containerd-runc-k8s.io-5b4f4b13f0d5378593f8098125dac60fa47de81c772837d7a3f867dd26962037-runc.2ssdsW.mount: Deactivated successfully. Sep 9 00:44:18.275816 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:35374.service. Sep 9 00:44:18.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.119:22-10.0.0.1:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:18.277096 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 9 00:44:18.277147 kernel: audit: type=1130 audit(1757378658.275:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.119:22-10.0.0.1:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:44:18.327000 audit[5683]: USER_ACCT pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.328293 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 35374 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:18.331004 kernel: audit: type=1101 audit(1757378658.327:486): pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.330000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.332190 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:18.335841 kernel: audit: type=1103 audit(1757378658.330:487): pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.335904 kernel: audit: type=1006 audit(1757378658.330:488): pid=5683 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 9 00:44:18.335922 kernel: audit: type=1300 audit(1757378658.330:488): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd214ec70 a2=3 a3=1 items=0 ppid=1 pid=5683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 
00:44:18.330000 audit[5683]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd214ec70 a2=3 a3=1 items=0 ppid=1 pid=5683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:18.336561 systemd[1]: Started session-15.scope. Sep 9 00:44:18.336920 systemd-logind[1299]: New session 15 of user core. Sep 9 00:44:18.338413 kernel: audit: type=1327 audit(1757378658.330:488): proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:18.330000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:18.340000 audit[5683]: USER_START pid=5683 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.341000 audit[5686]: CRED_ACQ pid=5686 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.346620 kernel: audit: type=1105 audit(1757378658.340:489): pid=5683 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.346662 kernel: audit: type=1103 audit(1757378658.341:490): pid=5686 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.481114 sshd[5683]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:18.480000 audit[5683]: 
USER_END pid=5683 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.483605 systemd-logind[1299]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:44:18.481000 audit[5683]: CRED_DISP pid=5683 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.484389 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:35374.service: Deactivated successfully. Sep 9 00:44:18.485506 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:44:18.486614 systemd-logind[1299]: Removed session 15. Sep 9 00:44:18.487269 kernel: audit: type=1106 audit(1757378658.480:491): pid=5683 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.487297 kernel: audit: type=1104 audit(1757378658.481:492): pid=5683 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:18.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.119:22-10.0.0.1:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:44:19.780034 kubelet[2118]: E0909 00:44:19.780001 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:44:23.484368 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:38794.service. Sep 9 00:44:23.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.119:22-10.0.0.1:38794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:23.485488 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 9 00:44:23.485528 kernel: audit: type=1130 audit(1757378663.483:494): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.119:22-10.0.0.1:38794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:23.521000 audit[5703]: USER_ACCT pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.523095 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 38794 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:23.524570 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:23.523000 audit[5703]: CRED_ACQ pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.528243 kernel: audit: type=1101 audit(1757378663.521:495): pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.528313 kernel: audit: type=1103 audit(1757378663.523:496): pid=5703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.528333 kernel: audit: type=1006 audit(1757378663.523:497): pid=5703 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 9 00:44:23.530047 systemd-logind[1299]: New session 16 of user core. Sep 9 00:44:23.523000 audit[5703]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff76da00 a2=3 a3=1 items=0 ppid=1 pid=5703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:23.531749 systemd[1]: Started session-16.scope. 
Sep 9 00:44:23.533326 kernel: audit: type=1300 audit(1757378663.523:497): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff76da00 a2=3 a3=1 items=0 ppid=1 pid=5703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:23.523000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:23.535000 audit[5703]: USER_START pid=5703 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.539272 kernel: audit: type=1327 audit(1757378663.523:497): proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:23.539325 kernel: audit: type=1105 audit(1757378663.535:498): pid=5703 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.536000 audit[5706]: CRED_ACQ pid=5706 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.542871 kernel: audit: type=1103 audit(1757378663.536:499): pid=5706 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.670100 sshd[5703]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:23.672734 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:38798.service. 
Sep 9 00:44:23.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.119:22-10.0.0.1:38798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:23.675000 audit[5703]: USER_END pid=5703 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.678067 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:38794.service: Deactivated successfully. Sep 9 00:44:23.679070 systemd-logind[1299]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:44:23.679072 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:44:23.679770 systemd-logind[1299]: Removed session 16. Sep 9 00:44:23.679887 kernel: audit: type=1130 audit(1757378663.671:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.119:22-10.0.0.1:38798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:44:23.679928 kernel: audit: type=1106 audit(1757378663.675:501): pid=5703 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.675000 audit[5703]: CRED_DISP pid=5703 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.119:22-10.0.0.1:38794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:23.720000 audit[5715]: USER_ACCT pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.721505 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 38798 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:23.721000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.721000 audit[5715]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed092e90 a2=3 a3=1 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:23.721000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 
9 00:44:23.723080 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:23.726431 systemd-logind[1299]: New session 17 of user core. Sep 9 00:44:23.727276 systemd[1]: Started session-17.scope. Sep 9 00:44:23.730000 audit[5715]: USER_START pid=5715 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:23.731000 audit[5720]: CRED_ACQ pid=5720 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:24.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.119:22-10.0.0.1:38808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:24.050035 sshd[5715]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:24.052801 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:38808.service. 
Sep 9 00:44:24.053000 audit[5715]: USER_END pid=5715 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:24.053000 audit[5715]: CRED_DISP pid=5715 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:24.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.119:22-10.0.0.1:38798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:24.056542 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:38798.service: Deactivated successfully. Sep 9 00:44:24.057906 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:44:24.057912 systemd-logind[1299]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:44:24.059370 systemd-logind[1299]: Removed session 17. 
Sep 9 00:44:24.098000 audit[5727]: USER_ACCT pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:24.099936 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 38808 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:44:24.099000 audit[5727]: CRED_ACQ pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:24.099000 audit[5727]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc3c30dd0 a2=3 a3=1 items=0 ppid=1 pid=5727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:24.099000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 9 00:44:24.101330 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:44:24.105105 systemd-logind[1299]: New session 18 of user core. Sep 9 00:44:24.105870 systemd[1]: Started session-18.scope. 
Sep 9 00:44:24.109000 audit[5727]: USER_START pid=5727 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:24.110000 audit[5732]: CRED_ACQ pid=5732 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:25.780572 kubelet[2118]: E0909 00:44:25.780531 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:44:25.801000 audit[5745]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:44:25.801000 audit[5745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe5cda660 a2=0 a3=1 items=0 ppid=2268 pid=5745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:25.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:44:25.808000 audit[5745]: NETFILTER_CFG table=nat:129 family=2 entries=26 op=nft_register_rule pid=5745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 9 00:44:25.808000 audit[5745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe5cda660 a2=0 a3=1 items=0 ppid=2268 pid=5745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:44:25.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 9 00:44:25.817255 sshd[5727]: pam_unix(sshd:session): session closed for user core Sep 9 00:44:25.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.119:22-10.0.0.1:38822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:25.817709 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:38822.service. Sep 9 00:44:25.817000 audit[5727]: USER_END pid=5727 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:25.817000 audit[5727]: CRED_DISP pid=5727 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 9 00:44:25.819911 systemd-logind[1299]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:44:25.820182 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:38808.service: Deactivated successfully. Sep 9 00:44:25.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.119:22-10.0.0.1:38808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:44:25.821157 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:44:25.821579 systemd-logind[1299]: Removed session 18. 
Sep 9 00:44:25.840000 audit[5751]: NETFILTER_CFG table=filter:130 family=2 entries=32 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 9 00:44:25.840000 audit[5751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffedd3b760 a2=0 a3=1 items=0 ppid=2268 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:25.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 9 00:44:25.849000 audit[5751]: NETFILTER_CFG table=nat:131 family=2 entries=26 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 9 00:44:25.849000 audit[5751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffedd3b760 a2=0 a3=1 items=0 ppid=2268 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:25.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 9 00:44:25.866000 audit[5746]: USER_ACCT pid=5746 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:25.868037 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:44:25.867000 audit[5746]: CRED_ACQ pid=5746 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:25.867000 audit[5746]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeaa4b1d0 a2=3 a3=1 items=0 ppid=1 pid=5746 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:25.867000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 9 00:44:25.869477 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:44:25.873121 systemd-logind[1299]: New session 19 of user core.
Sep 9 00:44:25.873905 systemd[1]: Started session-19.scope.
Sep 9 00:44:25.876000 audit[5746]: USER_START pid=5746 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:25.878000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.362731 sshd[5746]: pam_unix(sshd:session): session closed for user core
Sep 9 00:44:26.363000 audit[5746]: USER_END pid=5746 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.363000 audit[5746]: CRED_DISP pid=5746 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.365236 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:38838.service.
Sep 9 00:44:26.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.119:22-10.0.0.1:38838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:26.370080 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:38822.service: Deactivated successfully.
Sep 9 00:44:26.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.119:22-10.0.0.1:38822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:26.371052 systemd-logind[1299]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:44:26.371098 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:44:26.371776 systemd-logind[1299]: Removed session 19.
Sep 9 00:44:26.406000 audit[5760]: USER_ACCT pid=5760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.407924 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:44:26.407000 audit[5760]: CRED_ACQ pid=5760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.407000 audit[5760]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe19bcb30 a2=3 a3=1 items=0 ppid=1 pid=5760 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:26.407000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 9 00:44:26.409021 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:44:26.412732 systemd-logind[1299]: New session 20 of user core.
Sep 9 00:44:26.413528 systemd[1]: Started session-20.scope.
Sep 9 00:44:26.416000 audit[5760]: USER_START pid=5760 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.417000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.530628 sshd[5760]: pam_unix(sshd:session): session closed for user core
Sep 9 00:44:26.530000 audit[5760]: USER_END pid=5760 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.530000 audit[5760]: CRED_DISP pid=5760 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:26.533260 systemd-logind[1299]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:44:26.533491 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:38838.service: Deactivated successfully.
Sep 9 00:44:26.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.119:22-10.0.0.1:38838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:26.534354 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:44:26.534745 systemd-logind[1299]: Removed session 20.
Sep 9 00:44:27.780326 kubelet[2118]: E0909 00:44:27.780294 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:44:31.022000 audit[5780]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 9 00:44:31.026399 kernel: kauditd_printk_skb: 57 callbacks suppressed
Sep 9 00:44:31.026498 kernel: audit: type=1325 audit(1757378671.022:543): table=filter:132 family=2 entries=20 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 9 00:44:31.026527 kernel: audit: type=1300 audit(1757378671.022:543): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffef4e74e0 a2=0 a3=1 items=0 ppid=2268 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:31.022000 audit[5780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffef4e74e0 a2=0 a3=1 items=0 ppid=2268 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:31.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 9 00:44:31.031407 kernel: audit: type=1327 audit(1757378671.022:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 9 00:44:31.032000 audit[5780]: NETFILTER_CFG table=nat:133 family=2 entries=110 op=nft_register_chain pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 9 00:44:31.032000 audit[5780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffef4e74e0 a2=0 a3=1 items=0 ppid=2268 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:31.039161 kernel: audit: type=1325 audit(1757378671.032:544): table=nat:133 family=2 entries=110 op=nft_register_chain pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 9 00:44:31.039233 kernel: audit: type=1300 audit(1757378671.032:544): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffef4e74e0 a2=0 a3=1 items=0 ppid=2268 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:31.039267 kernel: audit: type=1327 audit(1757378671.032:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 9 00:44:31.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 9 00:44:31.533717 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:34500.service.
Sep 9 00:44:31.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.119:22-10.0.0.1:34500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:31.537008 kernel: audit: type=1130 audit(1757378671.532:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.119:22-10.0.0.1:34500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:31.571000 audit[5782]: USER_ACCT pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.572850 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 34500 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:44:31.573991 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:44:31.572000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.580280 kernel: audit: type=1101 audit(1757378671.571:546): pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.580354 kernel: audit: type=1103 audit(1757378671.572:547): pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.582109 kernel: audit: type=1006 audit(1757378671.572:548): pid=5782 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Sep 9 00:44:31.572000 audit[5782]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd5a1770 a2=3 a3=1 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:31.572000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 9 00:44:31.585119 systemd[1]: Started session-21.scope.
Sep 9 00:44:31.585636 systemd-logind[1299]: New session 21 of user core.
Sep 9 00:44:31.588000 audit[5782]: USER_START pid=5782 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.590000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.754197 sshd[5782]: pam_unix(sshd:session): session closed for user core
Sep 9 00:44:31.765000 audit[5782]: USER_END pid=5782 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.765000 audit[5782]: CRED_DISP pid=5782 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:31.768985 systemd-logind[1299]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:44:31.769610 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:34500.service: Deactivated successfully.
Sep 9 00:44:31.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.119:22-10.0.0.1:34500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:31.770559 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:44:31.771039 systemd-logind[1299]: Removed session 21.
Sep 9 00:44:31.779913 kubelet[2118]: E0909 00:44:31.779884 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:44:36.758056 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:34502.service.
Sep 9 00:44:36.762332 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 9 00:44:36.762414 kernel: audit: type=1130 audit(1757378676.757:554): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.119:22-10.0.0.1:34502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:36.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.119:22-10.0.0.1:34502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:36.797000 audit[5818]: USER_ACCT pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.798000 audit[5818]: CRED_ACQ pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.800339 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:44:36.802446 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 34502 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:44:36.803592 kernel: audit: type=1101 audit(1757378676.797:555): pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.803633 kernel: audit: type=1103 audit(1757378676.798:556): pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.803664 kernel: audit: type=1006 audit(1757378676.798:557): pid=5818 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Sep 9 00:44:36.798000 audit[5818]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffced53160 a2=3 a3=1 items=0 ppid=1 pid=5818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:36.807659 kernel: audit: type=1300 audit(1757378676.798:557): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffced53160 a2=3 a3=1 items=0 ppid=1 pid=5818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:36.807712 kernel: audit: type=1327 audit(1757378676.798:557): proctitle=737368643A20636F7265205B707269765D
Sep 9 00:44:36.798000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 9 00:44:36.810748 systemd-logind[1299]: New session 22 of user core.
Sep 9 00:44:36.811567 systemd[1]: Started session-22.scope.
Sep 9 00:44:36.815000 audit[5818]: USER_START pid=5818 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.819000 audit[5821]: CRED_ACQ pid=5821 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.823238 kernel: audit: type=1105 audit(1757378676.815:558): pid=5818 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.823310 kernel: audit: type=1103 audit(1757378676.819:559): pid=5821 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.972867 sshd[5818]: pam_unix(sshd:session): session closed for user core
Sep 9 00:44:36.973000 audit[5818]: USER_END pid=5818 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.973000 audit[5818]: CRED_DISP pid=5818 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.976244 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:34502.service: Deactivated successfully.
Sep 9 00:44:36.977083 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:44:36.979315 kernel: audit: type=1106 audit(1757378676.973:560): pid=5818 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.979397 kernel: audit: type=1104 audit(1757378676.973:561): pid=5818 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:36.979892 systemd-logind[1299]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:44:36.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.119:22-10.0.0.1:34502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:36.980823 systemd-logind[1299]: Removed session 22.
Sep 9 00:44:38.269059 systemd[1]: run-containerd-runc-k8s.io-8e189121ce2a587ef2f8c5adc0abdf581cd4a78c335df06023c7450172bfbe9a-runc.mnitF9.mount: Deactivated successfully.
Sep 9 00:44:41.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.119:22-10.0.0.1:58140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:41.976458 systemd[1]: Started sshd@22-10.0.0.119:22-10.0.0.1:58140.service.
Sep 9 00:44:41.977381 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 9 00:44:41.977437 kernel: audit: type=1130 audit(1757378681.975:563): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.119:22-10.0.0.1:58140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:42.019000 audit[5852]: USER_ACCT pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.020509 sshd[5852]: Accepted publickey for core from 10.0.0.1 port 58140 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:44:42.022280 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:44:42.019000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.026136 kernel: audit: type=1101 audit(1757378682.019:564): pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.026215 kernel: audit: type=1103 audit(1757378682.019:565): pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.028055 kernel: audit: type=1006 audit(1757378682.019:566): pid=5852 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Sep 9 00:44:42.019000 audit[5852]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0aff8d0 a2=3 a3=1 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:42.031572 kernel: audit: type=1300 audit(1757378682.019:566): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0aff8d0 a2=3 a3=1 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:44:42.033250 kernel: audit: type=1327 audit(1757378682.019:566): proctitle=737368643A20636F7265205B707269765D
Sep 9 00:44:42.019000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 9 00:44:42.033918 systemd-logind[1299]: New session 23 of user core.
Sep 9 00:44:42.034373 systemd[1]: Started session-23.scope.
Sep 9 00:44:42.039000 audit[5852]: USER_START pid=5852 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.043000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.046648 kernel: audit: type=1105 audit(1757378682.039:567): pid=5852 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.046722 kernel: audit: type=1103 audit(1757378682.043:568): pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.240932 sshd[5852]: pam_unix(sshd:session): session closed for user core
Sep 9 00:44:42.240000 audit[5852]: USER_END pid=5852 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.243386 systemd[1]: sshd@22-10.0.0.119:22-10.0.0.1:58140.service: Deactivated successfully.
Sep 9 00:44:42.244232 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:44:42.240000 audit[5852]: CRED_DISP pid=5852 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.247465 kernel: audit: type=1106 audit(1757378682.240:569): pid=5852 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.247533 kernel: audit: type=1104 audit(1757378682.240:570): pid=5852 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 9 00:44:42.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.119:22-10.0.0.1:58140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:44:42.248013 systemd-logind[1299]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:44:42.248844 systemd-logind[1299]: Removed session 23.