May 8 00:30:08.737678 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:30:08.737709 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed May 7 23:24:31 -00 2025
May 8 00:30:08.737717 kernel: efi: EFI v2.70 by EDK II
May 8 00:30:08.737723 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 8 00:30:08.737729 kernel: random: crng init done
May 8 00:30:08.737735 kernel: ACPI: Early table checksum verification disabled
May 8 00:30:08.737741 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 8 00:30:08.737748 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:30:08.737754 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737760 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737766 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737771 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737777 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737782 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737790 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737796 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737803 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:30:08.737809 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:30:08.737815 kernel: NUMA: Failed to initialise from firmware
May 8 00:30:08.737821 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:30:08.737826 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 8 00:30:08.737832 kernel: Zone ranges:
May 8 00:30:08.737838 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:30:08.737845 kernel: DMA32 empty
May 8 00:30:08.737851 kernel: Normal empty
May 8 00:30:08.737857 kernel: Movable zone start for each node
May 8 00:30:08.737863 kernel: Early memory node ranges
May 8 00:30:08.737869 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 8 00:30:08.737875 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 8 00:30:08.737881 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 8 00:30:08.737886 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 8 00:30:08.737892 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 8 00:30:08.737898 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 8 00:30:08.737904 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 8 00:30:08.737910 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:30:08.737916 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:30:08.737922 kernel: psci: probing for conduit method from ACPI.
May 8 00:30:08.737928 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:30:08.737934 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:30:08.737940 kernel: psci: Trusted OS migration not required
May 8 00:30:08.737948 kernel: psci: SMC Calling Convention v1.1
May 8 00:30:08.737955 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:30:08.737962 kernel: ACPI: SRAT not present
May 8 00:30:08.737969 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 8 00:30:08.737975 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 8 00:30:08.737982 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:30:08.737988 kernel: Detected PIPT I-cache on CPU0
May 8 00:30:08.737994 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:30:08.738000 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:30:08.738007 kernel: CPU features: detected: Spectre-v4
May 8 00:30:08.738013 kernel: CPU features: detected: Spectre-BHB
May 8 00:30:08.738020 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:30:08.738027 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:30:08.738033 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:30:08.738039 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:30:08.738046 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:30:08.738052 kernel: Policy zone: DMA
May 8 00:30:08.738059 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497
May 8 00:30:08.738066 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:30:08.738072 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:30:08.738078 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:30:08.738084 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:30:08.738092 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
May 8 00:30:08.738099 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:30:08.738105 kernel: trace event string verifier disabled
May 8 00:30:08.738111 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:30:08.738118 kernel: rcu: RCU event tracing is enabled.
May 8 00:30:08.738124 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:30:08.738130 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:30:08.738137 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:30:08.738143 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:30:08.738149 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:30:08.738155 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:30:08.738163 kernel: GICv3: 256 SPIs implemented
May 8 00:30:08.738169 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:30:08.738176 kernel: GICv3: Distributor has no Range Selector support
May 8 00:30:08.738182 kernel: Root IRQ handler: gic_handle_irq
May 8 00:30:08.738188 kernel: GICv3: 16 PPIs implemented
May 8 00:30:08.738194 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:30:08.738200 kernel: ACPI: SRAT not present
May 8 00:30:08.738206 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:30:08.738212 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:30:08.738219 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:30:08.738225 kernel: GICv3: using LPI property table @0x00000000400d0000
May 8 00:30:08.738231 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 8 00:30:08.738239 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:30:08.738245 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:30:08.738252 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:30:08.738258 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:30:08.738264 kernel: arm-pv: using stolen time PV
May 8 00:30:08.738280 kernel: Console: colour dummy device 80x25
May 8 00:30:08.738287 kernel: ACPI: Core revision 20210730
May 8 00:30:08.738297 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:30:08.738304 kernel: pid_max: default: 32768 minimum: 301
May 8 00:30:08.738311 kernel: LSM: Security Framework initializing
May 8 00:30:08.738319 kernel: SELinux: Initializing.
May 8 00:30:08.738326 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:30:08.738333 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:30:08.738340 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:30:08.738346 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:30:08.738353 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:30:08.738360 kernel: Remapping and enabling EFI services.
May 8 00:30:08.738368 kernel: smp: Bringing up secondary CPUs ...
May 8 00:30:08.738376 kernel: Detected PIPT I-cache on CPU1
May 8 00:30:08.738385 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:30:08.738392 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 8 00:30:08.738399 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:30:08.738405 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:30:08.738412 kernel: Detected PIPT I-cache on CPU2
May 8 00:30:08.738418 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:30:08.738425 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 8 00:30:08.738431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:30:08.738437 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:30:08.738444 kernel: Detected PIPT I-cache on CPU3
May 8 00:30:08.738451 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:30:08.738458 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 8 00:30:08.738464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:30:08.738471 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:30:08.738482 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:30:08.738490 kernel: SMP: Total of 4 processors activated.
May 8 00:30:08.738497 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:30:08.738504 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:30:08.738511 kernel: CPU features: detected: Common not Private translations
May 8 00:30:08.738517 kernel: CPU features: detected: CRC32 instructions
May 8 00:30:08.738524 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:30:08.738531 kernel: CPU features: detected: LSE atomic instructions
May 8 00:30:08.738539 kernel: CPU features: detected: Privileged Access Never
May 8 00:30:08.738546 kernel: CPU features: detected: RAS Extension Support
May 8 00:30:08.738552 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:30:08.738559 kernel: CPU: All CPU(s) started at EL1
May 8 00:30:08.738566 kernel: alternatives: patching kernel code
May 8 00:30:08.738574 kernel: devtmpfs: initialized
May 8 00:30:08.738581 kernel: KASLR enabled
May 8 00:30:08.738587 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:30:08.738595 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:30:08.738601 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:30:08.738608 kernel: SMBIOS 3.0.0 present.
May 8 00:30:08.738615 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 8 00:30:08.738622 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:30:08.738629 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:30:08.738637 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:30:08.738643 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:30:08.738651 kernel: audit: initializing netlink subsys (disabled)
May 8 00:30:08.738658 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
May 8 00:30:08.738664 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:30:08.738671 kernel: cpuidle: using governor menu
May 8 00:30:08.738678 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:30:08.738689 kernel: ASID allocator initialised with 32768 entries
May 8 00:30:08.738696 kernel: ACPI: bus type PCI registered
May 8 00:30:08.738705 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:30:08.738711 kernel: Serial: AMBA PL011 UART driver
May 8 00:30:08.738718 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:30:08.738725 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:30:08.738732 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:30:08.738739 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:30:08.738745 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:30:08.738752 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:30:08.738759 kernel: ACPI: Added _OSI(Module Device)
May 8 00:30:08.738767 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:30:08.738773 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:30:08.738780 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:30:08.738787 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 8 00:30:08.738793 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 8 00:30:08.738800 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 8 00:30:08.738807 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:30:08.738814 kernel: ACPI: Interpreter enabled
May 8 00:30:08.738820 kernel: ACPI: Using GIC for interrupt routing
May 8 00:30:08.738828 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:30:08.738835 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:30:08.738842 kernel: printk: console [ttyAMA0] enabled
May 8 00:30:08.738849 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:30:08.738998 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:30:08.739066 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:30:08.739125 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:30:08.739187 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:30:08.739245 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:30:08.739254 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:30:08.739261 kernel: PCI host bridge to bus 0000:00
May 8 00:30:08.739360 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:30:08.739416 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:30:08.739471 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:30:08.739524 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:30:08.739600 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:30:08.739671 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:30:08.739749 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:30:08.739813 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:30:08.739875 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:30:08.739938 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:30:08.740006 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:30:08.740070 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:30:08.740127 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:30:08.740185 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:30:08.740239 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:30:08.740248 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:30:08.740255 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:30:08.740262 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:30:08.740295 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:30:08.740304 kernel: iommu: Default domain type: Translated
May 8 00:30:08.740311 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:30:08.740318 kernel: vgaarb: loaded
May 8 00:30:08.740324 kernel: pps_core: LinuxPPS API ver. 1 registered
May 8 00:30:08.740332 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 8 00:30:08.740338 kernel: PTP clock support registered
May 8 00:30:08.740345 kernel: Registered efivars operations
May 8 00:30:08.740352 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:30:08.740361 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:30:08.740368 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:30:08.740375 kernel: pnp: PnP ACPI init
May 8 00:30:08.740454 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:30:08.740464 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:30:08.740471 kernel: NET: Registered PF_INET protocol family
May 8 00:30:08.740478 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:30:08.740485 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:30:08.740494 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:30:08.740501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:30:08.740508 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 8 00:30:08.740516 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:30:08.740522 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:30:08.740529 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:30:08.740536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:30:08.740543 kernel: PCI: CLS 0 bytes, default 64
May 8 00:30:08.740550 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:30:08.740558 kernel: kvm [1]: HYP mode not available
May 8 00:30:08.740565 kernel: Initialise system trusted keyrings
May 8 00:30:08.740572 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:30:08.740579 kernel: Key type asymmetric registered
May 8 00:30:08.740586 kernel: Asymmetric key parser 'x509' registered
May 8 00:30:08.740592 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 8 00:30:08.740599 kernel: io scheduler mq-deadline registered
May 8 00:30:08.740606 kernel: io scheduler kyber registered
May 8 00:30:08.740613 kernel: io scheduler bfq registered
May 8 00:30:08.740621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:30:08.740627 kernel: ACPI: button: Power Button [PWRB]
May 8 00:30:08.740635 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:30:08.740709 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:30:08.740719 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:30:08.740726 kernel: thunder_xcv, ver 1.0
May 8 00:30:08.740733 kernel: thunder_bgx, ver 1.0
May 8 00:30:08.740740 kernel: nicpf, ver 1.0
May 8 00:30:08.740747 kernel: nicvf, ver 1.0
May 8 00:30:08.740821 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:30:08.740881 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:30:08 UTC (1746664208)
May 8 00:30:08.740890 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:30:08.740897 kernel: NET: Registered PF_INET6 protocol family
May 8 00:30:08.740904 kernel: Segment Routing with IPv6
May 8 00:30:08.740911 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:30:08.740918 kernel: NET: Registered PF_PACKET protocol family
May 8 00:30:08.740924 kernel: Key type dns_resolver registered
May 8 00:30:08.740933 kernel: registered taskstats version 1
May 8 00:30:08.740940 kernel: Loading compiled-in X.509 certificates
May 8 00:30:08.740947 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: 47302b466ab2df930dd804d2ee9c8ab44de4e2dc'
May 8 00:30:08.740954 kernel: Key type .fscrypt registered
May 8 00:30:08.740961 kernel: Key type fscrypt-provisioning registered
May 8 00:30:08.740968 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:30:08.740975 kernel: ima: Allocated hash algorithm: sha1
May 8 00:30:08.740982 kernel: ima: No architecture policies found
May 8 00:30:08.740988 kernel: clk: Disabling unused clocks
May 8 00:30:08.740997 kernel: Freeing unused kernel memory: 36416K
May 8 00:30:08.741004 kernel: Run /init as init process
May 8 00:30:08.741011 kernel: with arguments:
May 8 00:30:08.741018 kernel: /init
May 8 00:30:08.741024 kernel: with environment:
May 8 00:30:08.741031 kernel: HOME=/
May 8 00:30:08.741038 kernel: TERM=linux
May 8 00:30:08.741045 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:30:08.741054 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 8 00:30:08.741066 systemd[1]: Detected virtualization kvm.
May 8 00:30:08.741073 systemd[1]: Detected architecture arm64.
May 8 00:30:08.741080 systemd[1]: Running in initrd.
May 8 00:30:08.741087 systemd[1]: No hostname configured, using default hostname.
May 8 00:30:08.741095 systemd[1]: Hostname set to .
May 8 00:30:08.741102 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:30:08.741109 systemd[1]: Queued start job for default target initrd.target.
May 8 00:30:08.741118 systemd[1]: Started systemd-ask-password-console.path.
May 8 00:30:08.741125 systemd[1]: Reached target cryptsetup.target.
May 8 00:30:08.741133 systemd[1]: Reached target paths.target.
May 8 00:30:08.741140 systemd[1]: Reached target slices.target.
May 8 00:30:08.741147 systemd[1]: Reached target swap.target.
May 8 00:30:08.741154 systemd[1]: Reached target timers.target.
May 8 00:30:08.741162 systemd[1]: Listening on iscsid.socket.
May 8 00:30:08.741170 systemd[1]: Listening on iscsiuio.socket.
May 8 00:30:08.741178 systemd[1]: Listening on systemd-journald-audit.socket.
May 8 00:30:08.741185 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 8 00:30:08.741192 systemd[1]: Listening on systemd-journald.socket.
May 8 00:30:08.741199 systemd[1]: Listening on systemd-networkd.socket.
May 8 00:30:08.741207 systemd[1]: Listening on systemd-udevd-control.socket.
May 8 00:30:08.741214 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 8 00:30:08.741221 systemd[1]: Reached target sockets.target.
May 8 00:30:08.741229 systemd[1]: Starting kmod-static-nodes.service...
May 8 00:30:08.741237 systemd[1]: Finished network-cleanup.service.
May 8 00:30:08.741244 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:30:08.741251 systemd[1]: Starting systemd-journald.service...
May 8 00:30:08.741259 systemd[1]: Starting systemd-modules-load.service...
May 8 00:30:08.741266 systemd[1]: Starting systemd-resolved.service...
May 8 00:30:08.741294 systemd[1]: Starting systemd-vconsole-setup.service...
May 8 00:30:08.741302 systemd[1]: Finished kmod-static-nodes.service.
May 8 00:30:08.741309 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:30:08.741317 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 8 00:30:08.741326 systemd[1]: Finished systemd-vconsole-setup.service.
May 8 00:30:08.741333 systemd[1]: Starting dracut-cmdline-ask.service...
May 8 00:30:08.741344 systemd-journald[291]: Journal started
May 8 00:30:08.741389 systemd-journald[291]: Runtime Journal (/run/log/journal/400686d2215d4f76aac7c8490d926916) is 6.0M, max 48.7M, 42.6M free.
May 8 00:30:08.733178 systemd-modules-load[292]: Inserted module 'overlay'
May 8 00:30:08.742821 systemd[1]: Started systemd-journald.service.
May 8 00:30:08.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.746286 kernel: audit: type=1130 audit(1746664208.743:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.747387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 8 00:30:08.752817 kernel: audit: type=1130 audit(1746664208.748:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.763308 kernel: audit: type=1130 audit(1746664208.760:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.760481 systemd[1]: Finished dracut-cmdline-ask.service.
May 8 00:30:08.763250 systemd[1]: Starting dracut-cmdline.service...
May 8 00:30:08.767434 systemd-resolved[293]: Positive Trust Anchors:
May 8 00:30:08.767450 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:30:08.767478 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 8 00:30:08.780025 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:30:08.780056 kernel: Bridge firewalling registered
May 8 00:30:08.780067 kernel: audit: type=1130 audit(1746664208.775:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.773218 systemd-resolved[293]: Defaulting to hostname 'linux'.
May 8 00:30:08.775331 systemd[1]: Started systemd-resolved.service.
May 8 00:30:08.775573 systemd-modules-load[292]: Inserted module 'br_netfilter'
May 8 00:30:08.776217 systemd[1]: Reached target nss-lookup.target.
May 8 00:30:08.783563 dracut-cmdline[309]: dracut-dracut-053
May 8 00:30:08.785991 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497
May 8 00:30:08.791298 kernel: SCSI subsystem initialized
May 8 00:30:08.798305 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:30:08.798354 kernel: device-mapper: uevent: version 1.0.3
May 8 00:30:08.798365 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 8 00:30:08.801711 systemd-modules-load[292]: Inserted module 'dm_multipath'
May 8 00:30:08.802515 systemd[1]: Finished systemd-modules-load.service.
May 8 00:30:08.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.806292 kernel: audit: type=1130 audit(1746664208.802:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.803976 systemd[1]: Starting systemd-sysctl.service...
May 8 00:30:08.811862 systemd[1]: Finished systemd-sysctl.service.
May 8 00:30:08.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.815377 kernel: audit: type=1130 audit(1746664208.812:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.855301 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:30:08.867303 kernel: iscsi: registered transport (tcp)
May 8 00:30:08.882300 kernel: iscsi: registered transport (qla4xxx)
May 8 00:30:08.882338 kernel: QLogic iSCSI HBA Driver
May 8 00:30:08.920125 systemd[1]: Finished dracut-cmdline.service.
May 8 00:30:08.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.921788 systemd[1]: Starting dracut-pre-udev.service...
May 8 00:30:08.924191 kernel: audit: type=1130 audit(1746664208.920:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:08.973301 kernel: raid6: neonx8 gen() 13744 MB/s
May 8 00:30:08.991295 kernel: raid6: neonx8 xor() 8318 MB/s
May 8 00:30:09.008295 kernel: raid6: neonx4 gen() 11245 MB/s
May 8 00:30:09.025294 kernel: raid6: neonx4 xor() 10995 MB/s
May 8 00:30:09.042288 kernel: raid6: neonx2 gen() 7117 MB/s
May 8 00:30:09.059295 kernel: raid6: neonx2 xor() 10456 MB/s
May 8 00:30:09.076286 kernel: raid6: neonx1 gen() 10517 MB/s
May 8 00:30:09.093285 kernel: raid6: neonx1 xor() 8753 MB/s
May 8 00:30:09.110286 kernel: raid6: int64x8 gen() 6268 MB/s
May 8 00:30:09.127287 kernel: raid6: int64x8 xor() 3541 MB/s
May 8 00:30:09.144285 kernel: raid6: int64x4 gen() 7214 MB/s
May 8 00:30:09.161287 kernel: raid6: int64x4 xor() 3852 MB/s
May 8 00:30:09.178288 kernel: raid6: int64x2 gen() 6147 MB/s
May 8 00:30:09.195286 kernel: raid6: int64x2 xor() 3317 MB/s
May 8 00:30:09.212287 kernel: raid6: int64x1 gen() 5043 MB/s
May 8 00:30:09.229722 kernel: raid6: int64x1 xor() 2645 MB/s
May 8 00:30:09.229737 kernel: raid6: using algorithm neonx8 gen() 13744 MB/s
May 8 00:30:09.229747 kernel: raid6: .... xor() 8318 MB/s, rmw enabled
May 8 00:30:09.229759 kernel: raid6: using neon recovery algorithm
May 8 00:30:09.241503 kernel: xor: measuring software checksum speed
May 8 00:30:09.241525 kernel: 8regs : 17235 MB/sec
May 8 00:30:09.242638 kernel: 32regs : 20702 MB/sec
May 8 00:30:09.242649 kernel: arm64_neon : 27719 MB/sec
May 8 00:30:09.242659 kernel: xor: using function: arm64_neon (27719 MB/sec)
May 8 00:30:09.302298 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 8 00:30:09.313316 systemd[1]: Finished dracut-pre-udev.service.
May 8 00:30:09.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:09.316000 audit: BPF prog-id=7 op=LOAD
May 8 00:30:09.316000 audit: BPF prog-id=8 op=LOAD
May 8 00:30:09.317291 kernel: audit: type=1130 audit(1746664209.313:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:09.317312 kernel: audit: type=1334 audit(1746664209.316:10): prog-id=7 op=LOAD
May 8 00:30:09.317611 systemd[1]: Starting systemd-udevd.service...
May 8 00:30:09.335393 systemd-udevd[491]: Using default interface naming scheme 'v252'.
May 8 00:30:09.339002 systemd[1]: Started systemd-udevd.service.
May 8 00:30:09.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:09.341836 systemd[1]: Starting dracut-pre-trigger.service...
May 8 00:30:09.355811 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
May 8 00:30:09.385638 systemd[1]: Finished dracut-pre-trigger.service.
May 8 00:30:09.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:09.387409 systemd[1]: Starting systemd-udev-trigger.service...
May 8 00:30:09.422301 systemd[1]: Finished systemd-udev-trigger.service.
May 8 00:30:09.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:09.452764 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:30:09.457244 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:30:09.457266 kernel: GPT:9289727 != 19775487
May 8 00:30:09.457289 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:30:09.457299 kernel: GPT:9289727 != 19775487
May 8 00:30:09.457308 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:30:09.457316 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:30:09.472300 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (537)
May 8 00:30:09.475225 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 8 00:30:09.478848 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 8 00:30:09.481832 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 8 00:30:09.482644 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 8 00:30:09.489269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 8 00:30:09.490827 systemd[1]: Starting disk-uuid.service...
May 8 00:30:09.496995 disk-uuid[560]: Primary Header is updated.
May 8 00:30:09.496995 disk-uuid[560]: Secondary Entries is updated.
May 8 00:30:09.496995 disk-uuid[560]: Secondary Header is updated.
May 8 00:30:09.501302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:30:10.514052 disk-uuid[561]: The operation has completed successfully.
May 8 00:30:10.515350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:30:10.535909 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:30:10.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.536014 systemd[1]: Finished disk-uuid.service.
May 8 00:30:10.537629 systemd[1]: Starting verity-setup.service...
May 8 00:30:10.552308 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:30:10.573081 systemd[1]: Found device dev-mapper-usr.device.
May 8 00:30:10.574519 systemd[1]: Mounting sysusr-usr.mount...
May 8 00:30:10.575343 systemd[1]: Finished verity-setup.service.
May 8 00:30:10.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.627311 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 8 00:30:10.626634 systemd[1]: Mounted sysusr-usr.mount.
May 8 00:30:10.627319 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 8 00:30:10.628053 systemd[1]: Starting ignition-setup.service...
May 8 00:30:10.630112 systemd[1]: Starting parse-ip-for-networkd.service...
May 8 00:30:10.637437 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:30:10.637595 kernel: BTRFS info (device vda6): using free space tree
May 8 00:30:10.637621 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:30:10.645742 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:30:10.652400 systemd[1]: Finished ignition-setup.service.
May 8 00:30:10.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.653887 systemd[1]: Starting ignition-fetch-offline.service...
May 8 00:30:10.727972 systemd[1]: Finished parse-ip-for-networkd.service.
May 8 00:30:10.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.729000 audit: BPF prog-id=9 op=LOAD
May 8 00:30:10.730161 systemd[1]: Starting systemd-networkd.service...
May 8 00:30:10.739064 ignition[646]: Ignition 2.14.0
May 8 00:30:10.739074 ignition[646]: Stage: fetch-offline
May 8 00:30:10.739115 ignition[646]: no configs at "/usr/lib/ignition/base.d"
May 8 00:30:10.739125 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:30:10.739266 ignition[646]: parsed url from cmdline: ""
May 8 00:30:10.739269 ignition[646]: no config URL provided
May 8 00:30:10.739287 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:30:10.739295 ignition[646]: no config at "/usr/lib/ignition/user.ign"
May 8 00:30:10.739317 ignition[646]: op(1): [started] loading QEMU firmware config module
May 8 00:30:10.739322 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:30:10.749635 ignition[646]: op(1): [finished] loading QEMU firmware config module
May 8 00:30:10.749688 ignition[646]: QEMU firmware config was not found. Ignoring...
May 8 00:30:10.754640 systemd-networkd[738]: lo: Link UP
May 8 00:30:10.754654 systemd-networkd[738]: lo: Gained carrier
May 8 00:30:10.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.755025 systemd-networkd[738]: Enumeration completed
May 8 00:30:10.755207 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:30:10.755301 systemd[1]: Started systemd-networkd.service.
May 8 00:30:10.756179 systemd[1]: Reached target network.target.
May 8 00:30:10.756444 systemd-networkd[738]: eth0: Link UP
May 8 00:30:10.756447 systemd-networkd[738]: eth0: Gained carrier
May 8 00:30:10.757839 systemd[1]: Starting iscsiuio.service...
May 8 00:30:10.768809 systemd[1]: Started iscsiuio.service.
May 8 00:30:10.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.770371 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:30:10.770380 systemd[1]: Starting iscsid.service...
May 8 00:30:10.773707 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:30:10.773707 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 8 00:30:10.773707 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 8 00:30:10.773707 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored.
May 8 00:30:10.773707 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:30:10.773707 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 8 00:30:10.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.776565 systemd[1]: Started iscsid.service.
May 8 00:30:10.780979 systemd[1]: Starting dracut-initqueue.service...
May 8 00:30:10.791127 systemd[1]: Finished dracut-initqueue.service.
May 8 00:30:10.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.791982 systemd[1]: Reached target remote-fs-pre.target.
May 8 00:30:10.793220 systemd[1]: Reached target remote-cryptsetup.target.
May 8 00:30:10.794565 systemd[1]: Reached target remote-fs.target.
May 8 00:30:10.796537 systemd[1]: Starting dracut-pre-mount.service...
May 8 00:30:10.804026 systemd[1]: Finished dracut-pre-mount.service.
May 8 00:30:10.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.811354 ignition[646]: parsing config with SHA512: f18498f646913a3d5b9ccaf538d8922309e007b190c5cf9125cc8d0028f4d48a8de5ca4864344eca2195ce9c396b50b87d07ac71e1a48db8c4c83c40538921f4
May 8 00:30:10.824744 unknown[646]: fetched base config from "system"
May 8 00:30:10.824755 unknown[646]: fetched user config from "qemu"
May 8 00:30:10.825323 ignition[646]: fetch-offline: fetch-offline passed
May 8 00:30:10.826233 systemd[1]: Finished ignition-fetch-offline.service.
May 8 00:30:10.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.825381 ignition[646]: Ignition finished successfully
May 8 00:30:10.827507 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:30:10.828293 systemd[1]: Starting ignition-kargs.service...
May 8 00:30:10.837730 ignition[759]: Ignition 2.14.0
May 8 00:30:10.837740 ignition[759]: Stage: kargs
May 8 00:30:10.837836 ignition[759]: no configs at "/usr/lib/ignition/base.d"
May 8 00:30:10.837846 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:30:10.840360 systemd[1]: Finished ignition-kargs.service.
May 8 00:30:10.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.838793 ignition[759]: kargs: kargs passed
May 8 00:30:10.838838 ignition[759]: Ignition finished successfully
May 8 00:30:10.842413 systemd[1]: Starting ignition-disks.service...
May 8 00:30:10.848675 ignition[765]: Ignition 2.14.0
May 8 00:30:10.848686 ignition[765]: Stage: disks
May 8 00:30:10.848784 ignition[765]: no configs at "/usr/lib/ignition/base.d"
May 8 00:30:10.848794 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:30:10.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.850420 systemd[1]: Finished ignition-disks.service.
May 8 00:30:10.849668 ignition[765]: disks: disks passed
May 8 00:30:10.851638 systemd[1]: Reached target initrd-root-device.target.
May 8 00:30:10.849720 ignition[765]: Ignition finished successfully
May 8 00:30:10.852804 systemd[1]: Reached target local-fs-pre.target.
May 8 00:30:10.853885 systemd[1]: Reached target local-fs.target.
May 8 00:30:10.854784 systemd[1]: Reached target sysinit.target.
May 8 00:30:10.855999 systemd[1]: Reached target basic.target.
May 8 00:30:10.857811 systemd[1]: Starting systemd-fsck-root.service...
May 8 00:30:10.868703 systemd-fsck[773]: ROOT: clean, 623/553520 files, 56022/553472 blocks
May 8 00:30:10.872012 systemd[1]: Finished systemd-fsck-root.service.
May 8 00:30:10.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.873376 systemd[1]: Mounting sysroot.mount...
May 8 00:30:10.879042 systemd[1]: Mounted sysroot.mount.
May 8 00:30:10.880056 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 8 00:30:10.879696 systemd[1]: Reached target initrd-root-fs.target.
May 8 00:30:10.882198 systemd[1]: Mounting sysroot-usr.mount...
May 8 00:30:10.882982 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 8 00:30:10.883018 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:30:10.883041 systemd[1]: Reached target ignition-diskful.target.
May 8 00:30:10.885182 systemd[1]: Mounted sysroot-usr.mount.
May 8 00:30:10.887304 systemd[1]: Starting initrd-setup-root.service...
May 8 00:30:10.891516 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:30:10.896182 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory
May 8 00:30:10.900344 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:30:10.904453 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:30:10.936129 systemd[1]: Finished initrd-setup-root.service.
May 8 00:30:10.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.937567 systemd[1]: Starting ignition-mount.service...
May 8 00:30:10.938729 systemd[1]: Starting sysroot-boot.service...
May 8 00:30:10.942894 bash[824]: umount: /sysroot/usr/share/oem: not mounted.
May 8 00:30:10.950918 ignition[825]: INFO : Ignition 2.14.0
May 8 00:30:10.950918 ignition[825]: INFO : Stage: mount
May 8 00:30:10.952976 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:30:10.952976 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:30:10.952976 ignition[825]: INFO : mount: mount passed
May 8 00:30:10.952976 ignition[825]: INFO : Ignition finished successfully
May 8 00:30:10.955503 systemd[1]: Finished ignition-mount.service.
May 8 00:30:10.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:10.965694 systemd[1]: Finished sysroot-boot.service.
May 8 00:30:10.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:11.582962 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 8 00:30:11.589389 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834)
May 8 00:30:11.589421 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:30:11.589431 kernel: BTRFS info (device vda6): using free space tree
May 8 00:30:11.590315 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:30:11.592996 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 8 00:30:11.594299 systemd[1]: Starting ignition-files.service...
May 8 00:30:11.608040 ignition[854]: INFO : Ignition 2.14.0
May 8 00:30:11.608040 ignition[854]: INFO : Stage: files
May 8 00:30:11.609380 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:30:11.609380 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:30:11.609380 ignition[854]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:30:11.613636 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:30:11.613636 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:30:11.615780 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:30:11.616775 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:30:11.616775 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:30:11.616397 unknown[854]: wrote ssh authorized keys file for user: core
May 8 00:30:11.619872 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:30:11.619872 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:30:11.619872 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:30:11.619872 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:30:11.673131 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:30:11.815096 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:30:11.817141 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 00:30:12.086922 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:30:12.419852 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:30:12.419852 ignition[854]: INFO : files: op(c): [started] processing unit "containerd.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:30:12.423393 ignition[854]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:30:12.469164 ignition[854]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:30:12.470604 ignition[854]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:30:12.470604 ignition[854]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:30:12.473218 ignition[854]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:30:12.473218 ignition[854]: INFO : files: files passed
May 8 00:30:12.473218 ignition[854]: INFO : Ignition finished successfully
May 8 00:30:12.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.474873 systemd[1]: Finished ignition-files.service.
May 8 00:30:12.476553 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 8 00:30:12.477777 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 8 00:30:12.481559 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 8 00:30:12.478464 systemd[1]: Starting ignition-quench.service...
May 8 00:30:12.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.481801 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:30:12.485258 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:30:12.481894 systemd[1]: Finished ignition-quench.service.
May 8 00:30:12.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.485729 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 8 00:30:12.487379 systemd[1]: Reached target ignition-complete.target.
May 8 00:30:12.489238 systemd[1]: Starting initrd-parse-etc.service...
May 8 00:30:12.502429 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:30:12.502531 systemd[1]: Finished initrd-parse-etc.service.
May 8 00:30:12.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.504056 systemd[1]: Reached target initrd-fs.target.
May 8 00:30:12.505164 systemd[1]: Reached target initrd.target.
May 8 00:30:12.506233 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 8 00:30:12.507023 systemd[1]: Starting dracut-pre-pivot.service...
May 8 00:30:12.517677 systemd[1]: Finished dracut-pre-pivot.service.
May 8 00:30:12.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.519197 systemd[1]: Starting initrd-cleanup.service...
May 8 00:30:12.527461 systemd[1]: Stopped target nss-lookup.target.
May 8 00:30:12.528402 systemd[1]: Stopped target remote-cryptsetup.target.
May 8 00:30:12.529639 systemd[1]: Stopped target timers.target.
May 8 00:30:12.530823 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:30:12.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.530937 systemd[1]: Stopped dracut-pre-pivot.service.
May 8 00:30:12.532288 systemd[1]: Stopped target initrd.target.
May 8 00:30:12.533423 systemd[1]: Stopped target basic.target.
May 8 00:30:12.534481 systemd[1]: Stopped target ignition-complete.target.
May 8 00:30:12.535593 systemd[1]: Stopped target ignition-diskful.target.
May 8 00:30:12.536687 systemd[1]: Stopped target initrd-root-device.target.
May 8 00:30:12.537875 systemd[1]: Stopped target remote-fs.target.
May 8 00:30:12.539068 systemd[1]: Stopped target remote-fs-pre.target.
May 8 00:30:12.540252 systemd[1]: Stopped target sysinit.target.
May 8 00:30:12.541326 systemd[1]: Stopped target local-fs.target.
May 8 00:30:12.542427 systemd[1]: Stopped target local-fs-pre.target.
May 8 00:30:12.543490 systemd[1]: Stopped target swap.target.
May 8 00:30:12.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.544457 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:30:12.544566 systemd[1]: Stopped dracut-pre-mount.service.
May 8 00:30:12.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.545631 systemd[1]: Stopped target cryptsetup.target.
May 8 00:30:12.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.546501 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:30:12.546593 systemd[1]: Stopped dracut-initqueue.service.
May 8 00:30:12.547719 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:30:12.547812 systemd[1]: Stopped ignition-fetch-offline.service.
May 8 00:30:12.548779 systemd[1]: Stopped target paths.target.
May 8 00:30:12.549653 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:30:12.553316 systemd[1]: Stopped systemd-ask-password-console.path.
May 8 00:30:12.554110 systemd[1]: Stopped target slices.target.
May 8 00:30:12.555118 systemd[1]: Stopped target sockets.target.
May 8 00:30:12.556049 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:30:12.556117 systemd[1]: Closed iscsid.socket.
May 8 00:30:12.556994 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:30:12.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.557056 systemd[1]: Closed iscsiuio.socket.
May 8 00:30:12.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.557946 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:30:12.558039 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 8 00:30:12.559174 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:30:12.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.559266 systemd[1]: Stopped ignition-files.service.
May 8 00:30:12.561018 systemd[1]: Stopping ignition-mount.service...
May 8 00:30:12.561797 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:30:12.561909 systemd[1]: Stopped kmod-static-nodes.service.
May 8 00:30:12.563715 systemd[1]: Stopping sysroot-boot.service...
May 8 00:30:12.564654 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:30:12.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.564777 systemd[1]: Stopped systemd-udev-trigger.service.
May 8 00:30:12.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.570352 ignition[895]: INFO : Ignition 2.14.0
May 8 00:30:12.570352 ignition[895]: INFO : Stage: umount
May 8 00:30:12.570352 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:30:12.570352 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:30:12.570352 ignition[895]: INFO : umount: umount passed
May 8 00:30:12.570352 ignition[895]: INFO : Ignition finished successfully
May 8 00:30:12.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.567852 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:30:12.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.567946 systemd[1]: Stopped dracut-pre-trigger.service.
May 8 00:30:12.570997 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:30:12.571085 systemd[1]: Stopped ignition-mount.service.
May 8 00:30:12.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.576785 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:30:12.577288 systemd[1]: Stopped target network.target.
May 8 00:30:12.579315 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:30:12.579364 systemd[1]: Stopped ignition-disks.service.
May 8 00:30:12.580926 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:30:12.580963 systemd[1]: Stopped ignition-kargs.service.
May 8 00:30:12.582120 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:30:12.582157 systemd[1]: Stopped ignition-setup.service.
May 8 00:30:12.584168 systemd[1]: Stopping systemd-networkd.service...
May 8 00:30:12.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.585216 systemd[1]: Stopping systemd-resolved.service...
May 8 00:30:12.586619 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:30:12.586704 systemd[1]: Finished initrd-cleanup.service.
May 8 00:30:12.592321 systemd-networkd[738]: eth0: DHCPv6 lease lost
May 8 00:30:12.593486 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:30:12.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.602000 audit: BPF prog-id=9 op=UNLOAD
May 8 00:30:12.593579 systemd[1]: Stopped systemd-networkd.service.
May 8 00:30:12.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.595285 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:30:12.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.595316 systemd[1]: Closed systemd-networkd.socket.
May 8 00:30:12.599563 systemd[1]: Stopping network-cleanup.service...
May 8 00:30:12.600573 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:30:12.600627 systemd[1]: Stopped parse-ip-for-networkd.service.
May 8 00:30:12.601808 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:30:12.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.601847 systemd[1]: Stopped systemd-sysctl.service.
May 8 00:30:12.603633 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:30:12.603682 systemd[1]: Stopped systemd-modules-load.service.
May 8 00:30:12.604408 systemd[1]: Stopping systemd-udevd.service...
May 8 00:30:12.613000 audit: BPF prog-id=6 op=UNLOAD
May 8 00:30:12.608070 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:30:12.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.608729 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:30:12.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.608835 systemd[1]: Stopped systemd-resolved.service.
May 8 00:30:12.613119 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:30:12.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.613226 systemd[1]: Stopped network-cleanup.service.
May 8 00:30:12.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.615455 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:30:12.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.615563 systemd[1]: Stopped systemd-udevd.service.
May 8 00:30:12.616998 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:30:12.617031 systemd[1]: Closed systemd-udevd-control.socket.
May 8 00:30:12.617949 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:30:12.617983 systemd[1]: Closed systemd-udevd-kernel.socket.
May 8 00:30:12.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.619092 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:30:12.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:12.619133 systemd[1]: Stopped dracut-pre-udev.service.
May 8 00:30:12.620219 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:30:12.620257 systemd[1]: Stopped dracut-cmdline.service.
May 8 00:30:12.621347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:30:12.621379 systemd[1]: Stopped dracut-cmdline-ask.service.
May 8 00:30:12.623045 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 8 00:30:12.623989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:30:12.624035 systemd[1]: Stopped systemd-vconsole-setup.service.
May 8 00:30:12.625841 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:30:12.625931 systemd[1]: Stopped sysroot-boot.service.
May 8 00:30:12.626734 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:30:12.637000 audit: BPF prog-id=8 op=UNLOAD
May 8 00:30:12.637000 audit: BPF prog-id=7 op=UNLOAD
May 8 00:30:12.626770 systemd[1]: Stopped initrd-setup-root.service.
May 8 00:30:12.628204 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:30:12.639000 audit: BPF prog-id=5 op=UNLOAD
May 8 00:30:12.639000 audit: BPF prog-id=4 op=UNLOAD
May 8 00:30:12.639000 audit: BPF prog-id=3 op=UNLOAD
May 8 00:30:12.628328 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 8 00:30:12.629207 systemd[1]: Reached target initrd-switch-root.target.
May 8 00:30:12.631196 systemd[1]: Starting initrd-switch-root.service...
May 8 00:30:12.636930 systemd[1]: Switching root.
May 8 00:30:12.655325 iscsid[744]: iscsid shutting down.
May 8 00:30:12.655907 systemd-journald[291]: Journal stopped
May 8 00:30:14.697969 systemd-journald[291]: Received SIGTERM from PID 1 (n/a).
May 8 00:30:14.698085 kernel: SELinux: Class mctp_socket not defined in policy.
May 8 00:30:14.698105 kernel: SELinux: Class anon_inode not defined in policy.
May 8 00:30:14.698119 kernel: SELinux: the above unknown classes and permissions will be allowed
May 8 00:30:14.698129 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:30:14.698139 kernel: SELinux: policy capability open_perms=1
May 8 00:30:14.698148 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:30:14.698157 kernel: SELinux: policy capability always_check_network=0
May 8 00:30:14.698167 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:30:14.698176 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:30:14.698186 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:30:14.698196 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:30:14.698206 systemd[1]: Successfully loaded SELinux policy in 37.753ms.
May 8 00:30:14.698225 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.886ms.
May 8 00:30:14.698238 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 8 00:30:14.698249 systemd[1]: Detected virtualization kvm.
May 8 00:30:14.698262 systemd[1]: Detected architecture arm64.
May 8 00:30:14.698299 systemd[1]: Detected first boot.
May 8 00:30:14.698311 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:30:14.698323 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 8 00:30:14.698333 systemd[1]: Populated /etc with preset unit settings.
May 8 00:30:14.698344 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 8 00:30:14.698355 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 8 00:30:14.698403 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:30:14.698415 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:30:14.698425 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 8 00:30:14.698438 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 8 00:30:14.698449 systemd[1]: Created slice system-addon\x2drun.slice.
May 8 00:30:14.698459 systemd[1]: Created slice system-getty.slice.
May 8 00:30:14.698469 systemd[1]: Created slice system-modprobe.slice.
May 8 00:30:14.698479 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 8 00:30:14.698490 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 8 00:30:14.698504 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 8 00:30:14.698515 systemd[1]: Created slice user.slice.
May 8 00:30:14.698526 systemd[1]: Started systemd-ask-password-console.path.
May 8 00:30:14.698536 systemd[1]: Started systemd-ask-password-wall.path.
May 8 00:30:14.698546 systemd[1]: Set up automount boot.automount.
May 8 00:30:14.698560 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 8 00:30:14.698570 systemd[1]: Reached target integritysetup.target.
May 8 00:30:14.698580 systemd[1]: Reached target remote-cryptsetup.target.
May 8 00:30:14.698590 systemd[1]: Reached target remote-fs.target.
May 8 00:30:14.698601 systemd[1]: Reached target slices.target.
May 8 00:30:14.698612 systemd[1]: Reached target swap.target.
May 8 00:30:14.698622 systemd[1]: Reached target torcx.target.
May 8 00:30:14.698632 systemd[1]: Reached target veritysetup.target.
May 8 00:30:14.698642 systemd[1]: Listening on systemd-coredump.socket.
May 8 00:30:14.698658 systemd[1]: Listening on systemd-initctl.socket.
May 8 00:30:14.698676 kernel: kauditd_printk_skb: 77 callbacks suppressed
May 8 00:30:14.698694 kernel: audit: type=1400 audit(1746664214.627:81): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 8 00:30:14.698707 systemd[1]: Listening on systemd-journald-audit.socket.
May 8 00:30:14.698719 kernel: audit: type=1335 audit(1746664214.627:82): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 8 00:30:14.698751 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 8 00:30:14.698765 systemd[1]: Listening on systemd-journald.socket.
May 8 00:30:14.698775 systemd[1]: Listening on systemd-networkd.socket.
May 8 00:30:14.698786 systemd[1]: Listening on systemd-udevd-control.socket.
May 8 00:30:14.698797 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 8 00:30:14.698808 systemd[1]: Listening on systemd-userdbd.socket.
May 8 00:30:14.698820 systemd[1]: Mounting dev-hugepages.mount...
May 8 00:30:14.698830 systemd[1]: Mounting dev-mqueue.mount...
May 8 00:30:14.698842 systemd[1]: Mounting media.mount...
May 8 00:30:14.698853 systemd[1]: Mounting sys-kernel-debug.mount...
May 8 00:30:14.698863 systemd[1]: Mounting sys-kernel-tracing.mount...
May 8 00:30:14.698873 systemd[1]: Mounting tmp.mount...
May 8 00:30:14.698883 systemd[1]: Starting flatcar-tmpfiles.service...
May 8 00:30:14.698893 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 8 00:30:14.698904 systemd[1]: Starting kmod-static-nodes.service...
May 8 00:30:14.698915 systemd[1]: Starting modprobe@configfs.service...
May 8 00:30:14.698925 systemd[1]: Starting modprobe@dm_mod.service...
May 8 00:30:14.698936 systemd[1]: Starting modprobe@drm.service...
May 8 00:30:14.698947 systemd[1]: Starting modprobe@efi_pstore.service...
May 8 00:30:14.698957 systemd[1]: Starting modprobe@fuse.service...
May 8 00:30:14.698967 systemd[1]: Starting modprobe@loop.service...
May 8 00:30:14.699056 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:30:14.699074 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 8 00:30:14.699086 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 8 00:30:14.699097 systemd[1]: Starting systemd-journald.service...
May 8 00:30:14.699107 systemd[1]: Starting systemd-modules-load.service...
May 8 00:30:14.699120 systemd[1]: Starting systemd-network-generator.service...
May 8 00:30:14.699130 systemd[1]: Starting systemd-remount-fs.service...
May 8 00:30:14.699140 systemd[1]: Starting systemd-udev-trigger.service...
May 8 00:30:14.699150 systemd[1]: Mounted dev-hugepages.mount.
May 8 00:30:14.699160 systemd[1]: Mounted dev-mqueue.mount.
May 8 00:30:14.699171 systemd[1]: Mounted media.mount.
May 8 00:30:14.699181 kernel: audit: type=1305 audit(1746664214.696:83): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 8 00:30:14.699191 kernel: audit: type=1300 audit(1746664214.696:83): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc104a120 a2=4000 a3=1 items=0 ppid=1 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:14.699203 kernel: audit: type=1327 audit(1746664214.696:83): proctitle="/usr/lib/systemd/systemd-journald"
May 8 00:30:14.699215 systemd-journald[1021]: Journal started
May 8 00:30:14.699259 systemd-journald[1021]: Runtime Journal (/run/log/journal/400686d2215d4f76aac7c8490d926916) is 6.0M, max 48.7M, 42.6M free.
May 8 00:30:14.627000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 8 00:30:14.696000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 8 00:30:14.696000 audit[1021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc104a120 a2=4000 a3=1 items=0 ppid=1 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:14.696000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 8 00:30:14.704300 kernel: loop: module loaded
May 8 00:30:14.705294 kernel: fuse: init (API version 7.34)
May 8 00:30:14.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.709309 systemd[1]: Started systemd-journald.service.
May 8 00:30:14.709335 kernel: audit: type=1130 audit(1746664214.708:84): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.709838 systemd[1]: Mounted sys-kernel-debug.mount.
May 8 00:30:14.711960 systemd[1]: Mounted sys-kernel-tracing.mount.
May 8 00:30:14.712795 systemd[1]: Mounted tmp.mount.
May 8 00:30:14.713558 systemd[1]: Finished kmod-static-nodes.service.
May 8 00:30:14.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.714535 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:30:14.714839 systemd[1]: Finished modprobe@configfs.service.
May 8 00:30:14.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.717342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:30:14.717484 systemd[1]: Finished modprobe@dm_mod.service.
May 8 00:30:14.719646 kernel: audit: type=1130 audit(1746664214.714:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.719694 kernel: audit: type=1130 audit(1746664214.716:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.719710 kernel: audit: type=1131 audit(1746664214.716:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.722861 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:30:14.723059 systemd[1]: Finished modprobe@drm.service.
May 8 00:30:14.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.725385 kernel: audit: type=1130 audit(1746664214.722:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.726093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:30:14.726294 systemd[1]: Finished modprobe@efi_pstore.service.
May 8 00:30:14.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.727159 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:30:14.727341 systemd[1]: Finished modprobe@fuse.service.
May 8 00:30:14.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.728160 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:30:14.728583 systemd[1]: Finished modprobe@loop.service.
May 8 00:30:14.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.729714 systemd[1]: Finished flatcar-tmpfiles.service.
May 8 00:30:14.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.730930 systemd[1]: Finished systemd-modules-load.service.
May 8 00:30:14.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.732136 systemd[1]: Finished systemd-network-generator.service.
May 8 00:30:14.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.733441 systemd[1]: Finished systemd-remount-fs.service.
May 8 00:30:14.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.734430 systemd[1]: Reached target network-pre.target.
May 8 00:30:14.736180 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 8 00:30:14.737881 systemd[1]: Mounting sys-kernel-config.mount...
May 8 00:30:14.738474 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:30:14.740341 systemd[1]: Starting systemd-hwdb-update.service...
May 8 00:30:14.742135 systemd[1]: Starting systemd-journal-flush.service...
May 8 00:30:14.743233 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:30:14.744341 systemd[1]: Starting systemd-random-seed.service...
May 8 00:30:14.745152 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 8 00:30:14.746493 systemd[1]: Starting systemd-sysctl.service...
May 8 00:30:14.748397 systemd[1]: Starting systemd-sysusers.service...
May 8 00:30:14.750889 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 8 00:30:14.751638 systemd[1]: Mounted sys-kernel-config.mount.
May 8 00:30:14.754525 systemd-journald[1021]: Time spent on flushing to /var/log/journal/400686d2215d4f76aac7c8490d926916 is 16.191ms for 925 entries.
May 8 00:30:14.754525 systemd-journald[1021]: System Journal (/var/log/journal/400686d2215d4f76aac7c8490d926916) is 8.0M, max 195.6M, 187.6M free.
May 8 00:30:14.783886 systemd-journald[1021]: Received client request to flush runtime journal.
May 8 00:30:14.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.762172 systemd[1]: Finished systemd-random-seed.service.
May 8 00:30:14.763114 systemd[1]: Reached target first-boot-complete.target.
May 8 00:30:14.768509 systemd[1]: Finished systemd-sysctl.service.
May 8 00:30:14.776672 systemd[1]: Finished systemd-udev-trigger.service.
May 8 00:30:14.778765 systemd[1]: Starting systemd-udev-settle.service...
May 8 00:30:14.781285 systemd[1]: Finished systemd-sysusers.service.
May 8 00:30:14.783043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 8 00:30:14.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.785791 systemd[1]: Finished systemd-journal-flush.service.
May 8 00:30:14.788497 udevadm[1081]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 00:30:14.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:14.810398 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 8 00:30:15.133005 systemd[1]: Finished systemd-hwdb-update.service.
May 8 00:30:15.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:15.135104 systemd[1]: Starting systemd-udevd.service...
May 8 00:30:15.156539 systemd-udevd[1088]: Using default interface naming scheme 'v252'.
May 8 00:30:15.170439 systemd[1]: Started systemd-udevd.service.
May 8 00:30:15.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:15.172679 systemd[1]: Starting systemd-networkd.service...
May 8 00:30:15.185846 systemd[1]: Starting systemd-userdbd.service...
May 8 00:30:15.197894 systemd[1]: Found device dev-ttyAMA0.device.
May 8 00:30:15.223013 systemd[1]: Started systemd-userdbd.service.
May 8 00:30:15.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:15.235443 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 8 00:30:15.282642 systemd-networkd[1096]: lo: Link UP
May 8 00:30:15.282662 systemd-networkd[1096]: lo: Gained carrier
May 8 00:30:15.283024 systemd-networkd[1096]: Enumeration completed
May 8 00:30:15.283141 systemd[1]: Started systemd-networkd.service.
May 8 00:30:15.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:15.284004 systemd-networkd[1096]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:30:15.285137 systemd-networkd[1096]: eth0: Link UP
May 8 00:30:15.285149 systemd-networkd[1096]: eth0: Gained carrier
May 8 00:30:15.285752 systemd[1]: Finished systemd-udev-settle.service.
May 8 00:30:15.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:15.287752 systemd[1]: Starting lvm2-activation-early.service...
May 8 00:30:15.296206 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:30:15.318516 systemd-networkd[1096]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:30:15.328364 systemd[1]: Finished lvm2-activation-early.service.
May 8 00:30:15.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:15.329213 systemd[1]: Reached target cryptsetup.target.
May 8 00:30:15.331119 systemd[1]: Starting lvm2-activation.service...
May 8 00:30:15.334423 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:30:15.363214 systemd[1]: Finished lvm2-activation.service.
May 8 00:30:15.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:15.364073 systemd[1]: Reached target local-fs-pre.target.
May 8 00:30:15.364839 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:30:15.364867 systemd[1]: Reached target local-fs.target. May 8 00:30:15.365510 systemd[1]: Reached target machines.target. May 8 00:30:15.367550 systemd[1]: Starting ldconfig.service... May 8 00:30:15.368422 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:30:15.368495 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:30:15.369733 systemd[1]: Starting systemd-boot-update.service... May 8 00:30:15.371752 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 00:30:15.374329 systemd[1]: Starting systemd-machine-id-commit.service... May 8 00:30:15.376928 systemd[1]: Starting systemd-sysext.service... May 8 00:30:15.384109 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1127 (bootctl) May 8 00:30:15.388991 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 00:30:15.392622 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 8 00:30:15.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.396236 systemd[1]: Unmounting usr-share-oem.mount... May 8 00:30:15.400565 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 00:30:15.400910 systemd[1]: Unmounted usr-share-oem.mount. May 8 00:30:15.453302 kernel: loop0: detected capacity change from 0 to 194096 May 8 00:30:15.456871 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:30:15.457980 systemd[1]: Finished systemd-machine-id-commit.service. 
May 8 00:30:15.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.471364 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:30:15.473018 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31) May 8 00:30:15.473018 systemd-fsck[1139]: /dev/vda1: 236 files, 117182/258078 clusters May 8 00:30:15.476442 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 00:30:15.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.478829 systemd[1]: Mounting boot.mount... May 8 00:30:15.488701 systemd[1]: Mounted boot.mount. May 8 00:30:15.490289 kernel: loop1: detected capacity change from 0 to 194096 May 8 00:30:15.494916 (sd-sysext)[1147]: Using extensions 'kubernetes'. May 8 00:30:15.495235 (sd-sysext)[1147]: Merged extensions into '/usr'. May 8 00:30:15.498487 systemd[1]: Finished systemd-boot-update.service. May 8 00:30:15.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.512311 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:30:15.513670 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:30:15.515588 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:30:15.517623 systemd[1]: Starting modprobe@loop.service... May 8 00:30:15.518467 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 8 00:30:15.518622 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:30:15.519373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:30:15.519531 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:30:15.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.520899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:30:15.521037 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:30:15.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.522282 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:30:15.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:30:15.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.524386 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:30:15.524542 systemd[1]: Finished modprobe@loop.service. May 8 00:30:15.525712 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:30:15.569357 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:30:15.572749 systemd[1]: Finished ldconfig.service. May 8 00:30:15.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.695945 systemd[1]: Mounting usr-share-oem.mount... May 8 00:30:15.701160 systemd[1]: Mounted usr-share-oem.mount. May 8 00:30:15.702840 systemd[1]: Finished systemd-sysext.service. May 8 00:30:15.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.704781 systemd[1]: Starting ensure-sysext.service... May 8 00:30:15.706358 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 00:30:15.710774 systemd[1]: Reloading. May 8 00:30:15.715243 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 00:30:15.715984 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:30:15.717307 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 8 00:30:15.746603 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-05-08T00:30:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:30:15.746633 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-05-08T00:30:15Z" level=info msg="torcx already run" May 8 00:30:15.809023 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:30:15.809041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:30:15.824721 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:30:15.871192 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 00:30:15.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.875024 systemd[1]: Starting audit-rules.service... May 8 00:30:15.876797 systemd[1]: Starting clean-ca-certificates.service... May 8 00:30:15.878668 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 00:30:15.881072 systemd[1]: Starting systemd-resolved.service... May 8 00:30:15.883804 systemd[1]: Starting systemd-timesyncd.service... May 8 00:30:15.885827 systemd[1]: Starting systemd-update-utmp.service... May 8 00:30:15.887236 systemd[1]: Finished clean-ca-certificates.service. 
May 8 00:30:15.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.890000 audit[1236]: SYSTEM_BOOT pid=1236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 00:30:15.894965 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:30:15.897001 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:30:15.898821 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:30:15.901563 systemd[1]: Starting modprobe@loop.service... May 8 00:30:15.902359 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:30:15.902492 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:30:15.902610 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:30:15.903746 systemd[1]: Finished systemd-update-utmp.service. May 8 00:30:15.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.905002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:30:15.905157 systemd[1]: Finished modprobe@dm_mod.service. 
May 8 00:30:15.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.906365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:30:15.906519 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:30:15.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.909903 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:30:15.910075 systemd[1]: Finished modprobe@loop.service. May 8 00:30:15.911557 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 8 00:30:15.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.913022 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:30:15.914479 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:30:15.916486 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:30:15.917143 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:30:15.917305 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:30:15.919117 systemd[1]: Starting systemd-update-done.service... May 8 00:30:15.920062 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:30:15.921109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:30:15.921296 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:30:15.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.922412 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:30:15.925014 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 8 00:30:15.926473 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:30:15.928507 systemd[1]: Starting modprobe@drm.service... May 8 00:30:15.930409 systemd[1]: Starting modprobe@loop.service... May 8 00:30:15.931104 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:30:15.931224 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:30:15.933054 systemd[1]: Starting systemd-networkd-wait-online.service... May 8 00:30:15.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.933914 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:30:15.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.934898 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:30:15.935109 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:30:15.936399 systemd[1]: Finished systemd-update-done.service. May 8 00:30:15.937514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:30:15.937680 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:30:15.938699 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 8 00:30:15.938907 systemd[1]: Finished modprobe@drm.service. May 8 00:30:15.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.940136 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:30:15.940335 systemd[1]: Finished modprobe@loop.service. May 8 00:30:15.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.941845 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:30:15.941947 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:30:15.946763 systemd[1]: Finished ensure-sysext.service. 
May 8 00:30:15.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:15.955000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 00:30:15.955000 audit[1274]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd7aab1b0 a2=420 a3=0 items=0 ppid=1229 pid=1274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:15.955000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 00:30:15.956086 augenrules[1274]: No rules May 8 00:30:15.956754 systemd[1]: Finished audit-rules.service. May 8 00:30:15.976237 systemd-timesyncd[1235]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:30:15.976581 systemd-timesyncd[1235]: Initial clock synchronization to Thu 2025-05-08 00:30:16.063039 UTC. May 8 00:30:15.976850 systemd[1]: Started systemd-timesyncd.service. May 8 00:30:15.977944 systemd[1]: Reached target time-set.target. May 8 00:30:15.985942 systemd-resolved[1234]: Positive Trust Anchors: May 8 00:30:15.985953 systemd-resolved[1234]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:30:15.985980 systemd-resolved[1234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:30:15.999567 systemd-resolved[1234]: Defaulting to hostname 'linux'. May 8 00:30:16.000976 systemd[1]: Started systemd-resolved.service. May 8 00:30:16.001723 systemd[1]: Reached target network.target. May 8 00:30:16.002329 systemd[1]: Reached target nss-lookup.target. May 8 00:30:16.002910 systemd[1]: Reached target sysinit.target. May 8 00:30:16.003552 systemd[1]: Started motdgen.path. May 8 00:30:16.004084 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 00:30:16.005056 systemd[1]: Started logrotate.timer. May 8 00:30:16.005711 systemd[1]: Started mdadm.timer. May 8 00:30:16.006202 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 00:30:16.006863 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:30:16.006888 systemd[1]: Reached target paths.target. May 8 00:30:16.007446 systemd[1]: Reached target timers.target. May 8 00:30:16.008342 systemd[1]: Listening on dbus.socket. May 8 00:30:16.010105 systemd[1]: Starting docker.socket... May 8 00:30:16.011967 systemd[1]: Listening on sshd.socket. May 8 00:30:16.012945 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 8 00:30:16.013343 systemd[1]: Listening on docker.socket. May 8 00:30:16.014112 systemd[1]: Reached target sockets.target. May 8 00:30:16.014750 systemd[1]: Reached target basic.target. May 8 00:30:16.015508 systemd[1]: System is tainted: cgroupsv1 May 8 00:30:16.015563 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:30:16.015587 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:30:16.016767 systemd[1]: Starting containerd.service... May 8 00:30:16.018695 systemd[1]: Starting dbus.service... May 8 00:30:16.020393 systemd[1]: Starting enable-oem-cloudinit.service... May 8 00:30:16.022095 systemd[1]: Starting extend-filesystems.service... May 8 00:30:16.022825 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 00:30:16.024169 systemd[1]: Starting motdgen.service... May 8 00:30:16.026085 systemd[1]: Starting prepare-helm.service... May 8 00:30:16.027960 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 00:30:16.030374 jq[1286]: false May 8 00:30:16.029989 systemd[1]: Starting sshd-keygen.service... May 8 00:30:16.032423 systemd[1]: Starting systemd-logind.service... May 8 00:30:16.033705 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:30:16.033773 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:30:16.035389 systemd[1]: Starting update-engine.service... May 8 00:30:16.037177 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 00:30:16.039728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 8 00:30:16.041478 jq[1301]: true May 8 00:30:16.039973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 00:30:16.050528 extend-filesystems[1287]: Found loop1 May 8 00:30:16.050528 extend-filesystems[1287]: Found vda May 8 00:30:16.050528 extend-filesystems[1287]: Found vda1 May 8 00:30:16.050528 extend-filesystems[1287]: Found vda2 May 8 00:30:16.050528 extend-filesystems[1287]: Found vda3 May 8 00:30:16.050528 extend-filesystems[1287]: Found usr May 8 00:30:16.050528 extend-filesystems[1287]: Found vda4 May 8 00:30:16.050528 extend-filesystems[1287]: Found vda6 May 8 00:30:16.050528 extend-filesystems[1287]: Found vda7 May 8 00:30:16.050528 extend-filesystems[1287]: Found vda9 May 8 00:30:16.050528 extend-filesystems[1287]: Checking size of /dev/vda9 May 8 00:30:16.044161 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:30:16.044415 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 00:30:16.046105 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:30:16.051107 systemd[1]: Finished motdgen.service. May 8 00:30:16.069821 jq[1311]: true May 8 00:30:16.074662 dbus-daemon[1285]: [system] SELinux support is enabled May 8 00:30:16.074873 systemd[1]: Started dbus.service. May 8 00:30:16.077713 extend-filesystems[1287]: Resized partition /dev/vda9 May 8 00:30:16.078404 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:30:16.078424 systemd[1]: Reached target system-config.target. May 8 00:30:16.079190 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:30:16.079217 systemd[1]: Reached target user-config.target. 
May 8 00:30:16.090380 tar[1307]: linux-arm64/helm May 8 00:30:16.097089 extend-filesystems[1328]: resize2fs 1.46.5 (30-Dec-2021) May 8 00:30:16.107289 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:30:16.134000 systemd-logind[1297]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:30:16.135164 systemd-logind[1297]: New seat seat0. May 8 00:30:16.149115 systemd[1]: Started systemd-logind.service. May 8 00:30:16.152307 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:30:16.165392 update_engine[1299]: I0508 00:30:16.165059 1299 main.cc:92] Flatcar Update Engine starting May 8 00:30:16.168532 extend-filesystems[1328]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:30:16.168532 extend-filesystems[1328]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:30:16.168532 extend-filesystems[1328]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:30:16.168600 systemd[1]: Started update-engine.service. May 8 00:30:16.172245 systemd[1]: Started locksmithd.service. May 8 00:30:16.172762 update_engine[1299]: I0508 00:30:16.172735 1299 update_check_scheduler.cc:74] Next update check in 2m31s May 8 00:30:16.179371 extend-filesystems[1287]: Resized filesystem in /dev/vda9 May 8 00:30:16.180154 bash[1338]: Updated "/home/core/.ssh/authorized_keys" May 8 00:30:16.175467 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:30:16.175732 systemd[1]: Finished extend-filesystems.service. May 8 00:30:16.179457 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 8 00:30:16.195866 env[1315]: time="2025-05-08T00:30:16.195025974Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 8 00:30:16.225197 env[1315]: time="2025-05-08T00:30:16.225098456Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 8 00:30:16.225306 env[1315]: time="2025-05-08T00:30:16.225251378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:30:16.227770 locksmithd[1345]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:30:16.228506 env[1315]: time="2025-05-08T00:30:16.228474930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:30:16.228568 env[1315]: time="2025-05-08T00:30:16.228507804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:30:16.228791 env[1315]: time="2025-05-08T00:30:16.228766988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:30:16.228820 env[1315]: time="2025-05-08T00:30:16.228792664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:30:16.228820 env[1315]: time="2025-05-08T00:30:16.228806776Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 8 00:30:16.228820 env[1315]: time="2025-05-08T00:30:16.228816197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:30:16.228905 env[1315]: time="2025-05-08T00:30:16.228889990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1
May 8 00:30:16.229190 env[1315]: time="2025-05-08T00:30:16.229171251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:30:16.229377 env[1315]: time="2025-05-08T00:30:16.229355268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:30:16.229409 env[1315]: time="2025-05-08T00:30:16.229379529Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:30:16.229457 env[1315]: time="2025-05-08T00:30:16.229441030Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 8 00:30:16.229484 env[1315]: time="2025-05-08T00:30:16.229458619Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:30:16.233750 env[1315]: time="2025-05-08T00:30:16.233721456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:30:16.233876 env[1315]: time="2025-05-08T00:30:16.233858892Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:30:16.233936 env[1315]: time="2025-05-08T00:30:16.233921161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:30:16.234016 env[1315]: time="2025-05-08T00:30:16.234001383Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234085 env[1315]: time="2025-05-08T00:30:16.234064623Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234149 env[1315]: time="2025-05-08T00:30:16.234135423Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234229 env[1315]: time="2025-05-08T00:30:16.234215564Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234664 env[1315]: time="2025-05-08T00:30:16.234635476Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234747 env[1315]: time="2025-05-08T00:30:16.234733084Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234804 env[1315]: time="2025-05-08T00:30:16.234791876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234863 env[1315]: time="2025-05-08T00:30:16.234849940Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:30:16.234921 env[1315]: time="2025-05-08T00:30:16.234908408Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:30:16.235090 env[1315]: time="2025-05-08T00:30:16.235070873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:30:16.235344 env[1315]: time="2025-05-08T00:30:16.235322416Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:30:16.235763 env[1315]: time="2025-05-08T00:30:16.235742165Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:30:16.235856 env[1315]: time="2025-05-08T00:30:16.235840057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:30:16.235912 env[1315]: time="2025-05-08T00:30:16.235899455Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:30:16.236093 env[1315]: time="2025-05-08T00:30:16.236076113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236165 env[1315]: time="2025-05-08T00:30:16.236148005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236234 env[1315]: time="2025-05-08T00:30:16.236221111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236313 env[1315]: time="2025-05-08T00:30:16.236299149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236370 env[1315]: time="2025-05-08T00:30:16.236357779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236425 env[1315]: time="2025-05-08T00:30:16.236411719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236482 env[1315]: time="2025-05-08T00:30:16.236469338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236537 env[1315]: time="2025-05-08T00:30:16.236524975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236622 env[1315]: time="2025-05-08T00:30:16.236606936Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:30:16.236808 env[1315]: time="2025-05-08T00:30:16.236787880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236881 env[1315]: time="2025-05-08T00:30:16.236865797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236936 env[1315]: time="2025-05-08T00:30:16.236923739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:30:16.236992 env[1315]: time="2025-05-08T00:30:16.236979822Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:30:16.237062 env[1315]: time="2025-05-08T00:30:16.237046013Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 8 00:30:16.237117 env[1315]: time="2025-05-08T00:30:16.237104036Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:30:16.237182 env[1315]: time="2025-05-08T00:30:16.237167599Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 8 00:30:16.237278 env[1315]: time="2025-05-08T00:30:16.237263752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:30:16.237589 env[1315]: time="2025-05-08T00:30:16.237532277Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:30:16.238240 env[1315]: time="2025-05-08T00:30:16.237939816Z" level=info msg="Connect containerd service"
May 8 00:30:16.238364 env[1315]: time="2025-05-08T00:30:16.238343553Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:30:16.239099 env[1315]: time="2025-05-08T00:30:16.239068259Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:30:16.239521 env[1315]: time="2025-05-08T00:30:16.239442277Z" level=info msg="Start subscribing containerd event"
May 8 00:30:16.239521 env[1315]: time="2025-05-08T00:30:16.239500139Z" level=info msg="Start recovering state"
May 8 00:30:16.239610 env[1315]: time="2025-05-08T00:30:16.239564834Z" level=info msg="Start event monitor"
May 8 00:30:16.239610 env[1315]: time="2025-05-08T00:30:16.239585819Z" level=info msg="Start snapshots syncer"
May 8 00:30:16.239610 env[1315]: time="2025-05-08T00:30:16.239596413Z" level=info msg="Start cni network conf syncer for default"
May 8 00:30:16.239610 env[1315]: time="2025-05-08T00:30:16.239603813Z" level=info msg="Start streaming server"
May 8 00:30:16.239789 env[1315]: time="2025-05-08T00:30:16.239767006Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:30:16.239890 env[1315]: time="2025-05-08T00:30:16.239876138Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:30:16.239998 env[1315]: time="2025-05-08T00:30:16.239983694Z" level=info msg="containerd successfully booted in 0.046157s"
May 8 00:30:16.240096 systemd[1]: Started containerd.service.
May 8 00:30:16.494511 systemd-networkd[1096]: eth0: Gained IPv6LL
May 8 00:30:16.496580 systemd[1]: Finished systemd-networkd-wait-online.service.
May 8 00:30:16.497602 systemd[1]: Reached target network-online.target.
May 8 00:30:16.499755 systemd[1]: Starting kubelet.service...
May 8 00:30:16.523466 tar[1307]: linux-arm64/LICENSE
May 8 00:30:16.523566 tar[1307]: linux-arm64/README.md
May 8 00:30:16.528537 systemd[1]: Finished prepare-helm.service.
May 8 00:30:17.031403 systemd[1]: Started kubelet.service.
May 8 00:30:17.389845 sshd_keygen[1317]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:30:17.406809 systemd[1]: Finished sshd-keygen.service.
May 8 00:30:17.409141 systemd[1]: Starting issuegen.service...
May 8 00:30:17.414081 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:30:17.414336 systemd[1]: Finished issuegen.service.
May 8 00:30:17.416371 systemd[1]: Starting systemd-user-sessions.service...
May 8 00:30:17.424420 systemd[1]: Finished systemd-user-sessions.service.
May 8 00:30:17.426911 systemd[1]: Started getty@tty1.service.
May 8 00:30:17.429153 systemd[1]: Started serial-getty@ttyAMA0.service.
May 8 00:30:17.430094 systemd[1]: Reached target getty.target.
May 8 00:30:17.430915 systemd[1]: Reached target multi-user.target.
May 8 00:30:17.433045 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 8 00:30:17.439812 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 8 00:30:17.440034 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 8 00:30:17.441073 systemd[1]: Startup finished in 4.766s (kernel) + 4.723s (userspace) = 9.490s.
May 8 00:30:17.541887 kubelet[1371]: E0508 00:30:17.541830    1371 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:30:17.543475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:30:17.543629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:30:21.352482 systemd[1]: Created slice system-sshd.slice.
May 8 00:30:21.353710 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:45648.service.
May 8 00:30:21.407187 sshd[1398]: Accepted publickey for core from 10.0.0.1 port 45648 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:30:21.409092 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:21.418806 systemd[1]: Created slice user-500.slice.
May 8 00:30:21.419814 systemd[1]: Starting user-runtime-dir@500.service...
May 8 00:30:21.421857 systemd-logind[1297]: New session 1 of user core.
May 8 00:30:21.428564 systemd[1]: Finished user-runtime-dir@500.service.
May 8 00:30:21.429737 systemd[1]: Starting user@500.service...
May 8 00:30:21.432639 (systemd)[1403]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:21.498050 systemd[1403]: Queued start job for default target default.target.
May 8 00:30:21.498260 systemd[1403]: Reached target paths.target.
May 8 00:30:21.498274 systemd[1403]: Reached target sockets.target.
May 8 00:30:21.498299 systemd[1403]: Reached target timers.target.
May 8 00:30:21.498321 systemd[1403]: Reached target basic.target.
May 8 00:30:21.498373 systemd[1403]: Reached target default.target.
May 8 00:30:21.498396 systemd[1403]: Startup finished in 60ms.
May 8 00:30:21.498468 systemd[1]: Started user@500.service.
May 8 00:30:21.499403 systemd[1]: Started session-1.scope.
May 8 00:30:21.549300 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:45650.service.
May 8 00:30:21.592283 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 45650 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:30:21.593511 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:21.596969 systemd-logind[1297]: New session 2 of user core.
May 8 00:30:21.597754 systemd[1]: Started session-2.scope.
May 8 00:30:21.652688 sshd[1413]: pam_unix(sshd:session): session closed for user core
May 8 00:30:21.655113 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:45652.service.
May 8 00:30:21.655666 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:45650.service: Deactivated successfully.
May 8 00:30:21.656548 systemd-logind[1297]: Session 2 logged out. Waiting for processes to exit.
May 8 00:30:21.656617 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:30:21.657407 systemd-logind[1297]: Removed session 2.
May 8 00:30:21.698191 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:30:21.699372 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:21.702652 systemd-logind[1297]: New session 3 of user core.
May 8 00:30:21.703399 systemd[1]: Started session-3.scope.
May 8 00:30:21.754786 sshd[1418]: pam_unix(sshd:session): session closed for user core
May 8 00:30:21.755747 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:45668.service.
May 8 00:30:21.757228 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:45652.service: Deactivated successfully.
May 8 00:30:21.759507 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:30:21.759911 systemd-logind[1297]: Session 3 logged out. Waiting for processes to exit.
May 8 00:30:21.760864 systemd-logind[1297]: Removed session 3.
May 8 00:30:21.800537 sshd[1425]: Accepted publickey for core from 10.0.0.1 port 45668 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:30:21.801855 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:21.806482 systemd-logind[1297]: New session 4 of user core.
May 8 00:30:21.806826 systemd[1]: Started session-4.scope.
May 8 00:30:21.862492 sshd[1425]: pam_unix(sshd:session): session closed for user core
May 8 00:30:21.863621 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:45684.service.
May 8 00:30:21.867111 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:45668.service: Deactivated successfully.
May 8 00:30:21.867818 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:30:21.870391 systemd-logind[1297]: Session 4 logged out. Waiting for processes to exit.
May 8 00:30:21.871301 systemd-logind[1297]: Removed session 4.
May 8 00:30:21.907142 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 45684 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:30:21.908697 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:21.912827 systemd-logind[1297]: New session 5 of user core.
May 8 00:30:21.913530 systemd[1]: Started session-5.scope.
May 8 00:30:21.979084 sudo[1438]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:30:21.979329 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 8 00:30:21.988503 dbus-daemon[1285]: avc:  received setenforce notice (enforcing=1)
May 8 00:30:21.990245 sudo[1438]: pam_unix(sudo:session): session closed for user root
May 8 00:30:21.992216 sshd[1432]: pam_unix(sshd:session): session closed for user core
May 8 00:30:21.994794 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:45698.service.
May 8 00:30:21.995598 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:45684.service: Deactivated successfully.
May 8 00:30:21.996675 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:30:21.997549 systemd-logind[1297]: Session 5 logged out. Waiting for processes to exit.
May 8 00:30:21.999082 systemd-logind[1297]: Removed session 5.
May 8 00:30:22.037951 sshd[1440]: Accepted publickey for core from 10.0.0.1 port 45698 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:30:22.039212 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:22.044315 systemd-logind[1297]: New session 6 of user core.
May 8 00:30:22.045269 systemd[1]: Started session-6.scope.
May 8 00:30:22.101139 sudo[1447]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:30:22.101384 sudo[1447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 8 00:30:22.104021 sudo[1447]: pam_unix(sudo:session): session closed for user root
May 8 00:30:22.108256 sudo[1446]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 8 00:30:22.108742 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 8 00:30:22.117226 systemd[1]: Stopping audit-rules.service...
May 8 00:30:22.118000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 8 00:30:22.118579 auditctl[1450]: No rules
May 8 00:30:22.118867 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:30:22.119079 systemd[1]: Stopped audit-rules.service.
May 8 00:30:22.120817 kernel: kauditd_printk_skb: 64 callbacks suppressed
May 8 00:30:22.120871 kernel: audit: type=1305 audit(1746664222.118:151): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 8 00:30:22.120889 kernel: audit: type=1300 audit(1746664222.118:151): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6ae9a70 a2=420 a3=0 items=0 ppid=1 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.118000 audit[1450]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6ae9a70 a2=420 a3=0 items=0 ppid=1 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.120589 systemd[1]: Starting audit-rules.service...
May 8 00:30:22.118000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
May 8 00:30:22.123967 kernel: audit: type=1327 audit(1746664222.118:151): proctitle=2F7362696E2F617564697463746C002D44
May 8 00:30:22.124017 kernel: audit: type=1131 audit(1746664222.118:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.137182 augenrules[1468]: No rules
May 8 00:30:22.137893 systemd[1]: Finished audit-rules.service.
May 8 00:30:22.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.139057 sudo[1446]: pam_unix(sudo:session): session closed for user root
May 8 00:30:22.138000 audit[1446]: USER_END pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.145021 kernel: audit: type=1130 audit(1746664222.137:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.145063 kernel: audit: type=1106 audit(1746664222.138:154): pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.145079 kernel: audit: type=1104 audit(1746664222.138:155): pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.145094 kernel: audit: type=1106 audit(1746664222.140:156): pid=1440 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.138000 audit[1446]: CRED_DISP pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.140000 audit[1440]: USER_END pid=1440 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.142805 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:45708.service.
May 8 00:30:22.140497 sshd[1440]: pam_unix(sshd:session): session closed for user core
May 8 00:30:22.143233 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:45698.service: Deactivated successfully.
May 8 00:30:22.144189 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:30:22.144684 systemd-logind[1297]: Session 6 logged out. Waiting for processes to exit.
May 8 00:30:22.146199 systemd-logind[1297]: Removed session 6.
May 8 00:30:22.141000 audit[1440]: CRED_DISP pid=1440 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.149453 kernel: audit: type=1104 audit(1746664222.141:157): pid=1440 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.149490 kernel: audit: type=1130 audit(1746664222.142:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.15:22-10.0.0.1:45708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.15:22-10.0.0.1:45708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.15:22-10.0.0.1:45698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.186000 audit[1473]: USER_ACCT pid=1473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.188503 sshd[1473]: Accepted publickey for core from 10.0.0.1 port 45708 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:30:22.188000 audit[1473]: CRED_ACQ pid=1473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.188000 audit[1473]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc092db0 a2=3 a3=1 items=0 ppid=1 pid=1473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.188000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 8 00:30:22.190228 sshd[1473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:30:22.193604 systemd-logind[1297]: New session 7 of user core.
May 8 00:30:22.194404 systemd[1]: Started session-7.scope.
May 8 00:30:22.196000 audit[1473]: USER_START pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.198000 audit[1478]: CRED_ACQ pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:30:22.246000 audit[1479]: USER_ACCT pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.247870 sudo[1479]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:30:22.246000 audit[1479]: CRED_REFR pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.248412 sudo[1479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 8 00:30:22.248000 audit[1479]: USER_START pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 8 00:30:22.319861 systemd[1]: Starting docker.service...
May 8 00:30:22.401333 env[1490]: time="2025-05-08T00:30:22.401266529Z" level=info msg="Starting up"
May 8 00:30:22.402798 env[1490]: time="2025-05-08T00:30:22.402769659Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 8 00:30:22.402899 env[1490]: time="2025-05-08T00:30:22.402884979Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 8 00:30:22.402969 env[1490]: time="2025-05-08T00:30:22.402953149Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0  }] }" module=grpc
May 8 00:30:22.403021 env[1490]: time="2025-05-08T00:30:22.403008860Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 8 00:30:22.405572 env[1490]: time="2025-05-08T00:30:22.405543031Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 8 00:30:22.405572 env[1490]: time="2025-05-08T00:30:22.405568354Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 8 00:30:22.405691 env[1490]: time="2025-05-08T00:30:22.405586241Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0  }] }" module=grpc
May 8 00:30:22.405691 env[1490]: time="2025-05-08T00:30:22.405596892Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 8 00:30:22.641549 env[1490]: time="2025-05-08T00:30:22.641458576Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 8 00:30:22.641549 env[1490]: time="2025-05-08T00:30:22.641485507Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 8 00:30:22.642163 env[1490]: time="2025-05-08T00:30:22.642125129Z" level=info msg="Loading containers: start."
May 8 00:30:22.695000 audit[1524]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.695000 audit[1524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff4f51f40 a2=0 a3=1 items=0 ppid=1490 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.695000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
May 8 00:30:22.697000 audit[1526]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.697000 audit[1526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc9239460 a2=0 a3=1 items=0 ppid=1490 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.697000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
May 8 00:30:22.699000 audit[1528]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.699000 audit[1528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe0fb1910 a2=0 a3=1 items=0 ppid=1490 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.699000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 8 00:30:22.701000 audit[1530]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.701000 audit[1530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffee087b90 a2=0 a3=1 items=0 ppid=1490 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.701000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 8 00:30:22.703000 audit[1532]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.703000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffdfc6620 a2=0 a3=1 items=0 ppid=1490 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.703000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
May 8 00:30:22.728000 audit[1537]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.728000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd5e85360 a2=0 a3=1 items=0 ppid=1490 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.728000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
May 8 00:30:22.734000 audit[1539]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.734000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffcf51af0 a2=0 a3=1 items=0 ppid=1490 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.734000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
May 8 00:30:22.736000 audit[1541]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.736000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffea3b7ba0 a2=0 a3=1 items=0 ppid=1490 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.736000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
May 8 00:30:22.738000 audit[1543]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.738000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffcee6f9a0 a2=0 a3=1 items=0 ppid=1490 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.738000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 8 00:30:22.744000 audit[1547]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.744000 audit[1547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcaf55570 a2=0 a3=1 items=0 ppid=1490 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.744000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 8 00:30:22.754000 audit[1548]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.754000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc3afbae0 a2=0 a3=1 items=0 ppid=1490 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.754000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 8 00:30:22.764297 kernel: Initializing XFRM netlink socket
May 8 00:30:22.792177 env[1490]: time="2025-05-08T00:30:22.792130904Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 8 00:30:22.806000 audit[1556]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.806000 audit[1556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffda294f00 a2=0 a3=1 items=0 ppid=1490 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.806000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
May 8 00:30:22.824000 audit[1559]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.824000 audit[1559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd92071f0 a2=0 a3=1 items=0 ppid=1490 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:22.824000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
May 8 00:30:22.827000 audit[1562]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:22.827000 audit[1562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd508b840 a2=0 a3=1 items=0 ppid=1490 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8
00:30:22.827000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 8 00:30:22.829000 audit[1564]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.829000 audit[1564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff779b420 a2=0 a3=1 items=0 ppid=1490 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.829000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 8 00:30:22.831000 audit[1566]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.831000 audit[1566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff6166e60 a2=0 a3=1 items=0 ppid=1490 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.831000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 8 00:30:22.833000 audit[1568]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.833000 audit[1568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe18e44e0 a2=0 a3=1 items=0 ppid=1490 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.833000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 8 00:30:22.835000 audit[1570]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.835000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc6147f70 a2=0 a3=1 items=0 ppid=1490 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.835000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 8 00:30:22.842000 audit[1573]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.842000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffdb59b8d0 a2=0 a3=1 items=0 ppid=1490 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 8 00:30:22.844000 audit[1575]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.844000 audit[1575]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffeab09d20 a2=0 a3=1 items=0 ppid=1490 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.844000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 8 00:30:22.845000 audit[1577]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.845000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff959daa0 a2=0 a3=1 items=0 ppid=1490 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.845000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 8 00:30:22.847000 audit[1579]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.847000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe3cb8190 a2=0 a3=1 items=0 ppid=1490 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.847000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 8 
00:30:22.849691 systemd-networkd[1096]: docker0: Link UP May 8 00:30:22.856000 audit[1583]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.856000 audit[1583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffa385610 a2=0 a3=1 items=0 ppid=1490 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.856000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 8 00:30:22.870000 audit[1584]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:30:22.870000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffeefd43c0 a2=0 a3=1 items=0 ppid=1490 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:22.870000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 8 00:30:22.871801 env[1490]: time="2025-05-08T00:30:22.871759609Z" level=info msg="Loading containers: done." May 8 00:30:22.889570 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2237568037-merged.mount: Deactivated successfully. 
May 8 00:30:22.898385 env[1490]: time="2025-05-08T00:30:22.897218653Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:30:22.898385 env[1490]: time="2025-05-08T00:30:22.897429918Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 8 00:30:22.898385 env[1490]: time="2025-05-08T00:30:22.897780177Z" level=info msg="Daemon has completed initialization" May 8 00:30:22.918166 systemd[1]: Started docker.service. May 8 00:30:22.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:22.924538 env[1490]: time="2025-05-08T00:30:22.924427866Z" level=info msg="API listen on /run/docker.sock" May 8 00:30:23.670950 env[1315]: time="2025-05-08T00:30:23.670895732Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:30:24.615137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148461431.mount: Deactivated successfully. 
May 8 00:30:26.121508 env[1315]: time="2025-05-08T00:30:26.121463206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:26.123356 env[1315]: time="2025-05-08T00:30:26.123321981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:26.125837 env[1315]: time="2025-05-08T00:30:26.125810751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:26.127438 env[1315]: time="2025-05-08T00:30:26.127410910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:26.129155 env[1315]: time="2025-05-08T00:30:26.129113801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 8 00:30:26.139005 env[1315]: time="2025-05-08T00:30:26.138971204Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:30:27.794476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:30:27.794650 systemd[1]: Stopped kubelet.service. May 8 00:30:27.796150 systemd[1]: Starting kubelet.service... May 8 00:30:27.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:30:27.798775 kernel: kauditd_printk_skb: 84 callbacks suppressed May 8 00:30:27.798851 kernel: audit: type=1130 audit(1746664227.793:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:27.798881 kernel: audit: type=1131 audit(1746664227.793:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:27.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:27.888512 systemd[1]: Started kubelet.service. May 8 00:30:27.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:27.891292 kernel: audit: type=1130 audit(1746664227.887:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:27.936745 kubelet[1643]: E0508 00:30:27.936697 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:30:27.938977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:30:27.939120 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 8 00:30:27.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 8 00:30:27.942310 kernel: audit: type=1131 audit(1746664227.938:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 8 00:30:28.247570 env[1315]: time="2025-05-08T00:30:28.247451059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:28.249146 env[1315]: time="2025-05-08T00:30:28.249119901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:28.251148 env[1315]: time="2025-05-08T00:30:28.251100304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:28.253023 env[1315]: time="2025-05-08T00:30:28.252979846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:28.253805 env[1315]: time="2025-05-08T00:30:28.253766724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 8 00:30:28.263924 env[1315]: time="2025-05-08T00:30:28.263883244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 
00:30:29.567676 env[1315]: time="2025-05-08T00:30:29.567622978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:29.572615 env[1315]: time="2025-05-08T00:30:29.572566823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:29.574215 env[1315]: time="2025-05-08T00:30:29.574181908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:29.576325 env[1315]: time="2025-05-08T00:30:29.576299193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:29.577156 env[1315]: time="2025-05-08T00:30:29.577126373Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 8 00:30:29.589899 env[1315]: time="2025-05-08T00:30:29.589856412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:30:30.820554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006970722.mount: Deactivated successfully. 
May 8 00:30:31.358691 env[1315]: time="2025-05-08T00:30:31.358637555Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:31.359739 env[1315]: time="2025-05-08T00:30:31.359708002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:31.361532 env[1315]: time="2025-05-08T00:30:31.361503789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:31.363066 env[1315]: time="2025-05-08T00:30:31.363034950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:31.363626 env[1315]: time="2025-05-08T00:30:31.363480201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 00:30:31.376980 env[1315]: time="2025-05-08T00:30:31.376946426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:30:31.947836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915954829.mount: Deactivated successfully. 
May 8 00:30:33.116871 env[1315]: time="2025-05-08T00:30:33.116824302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.118363 env[1315]: time="2025-05-08T00:30:33.118330390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.121528 env[1315]: time="2025-05-08T00:30:33.120719307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.122406 env[1315]: time="2025-05-08T00:30:33.122377606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.123353 env[1315]: time="2025-05-08T00:30:33.123323145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 00:30:33.134762 env[1315]: time="2025-05-08T00:30:33.134481210Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:30:33.589120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183585373.mount: Deactivated successfully. 
May 8 00:30:33.594804 env[1315]: time="2025-05-08T00:30:33.594066780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.595499 env[1315]: time="2025-05-08T00:30:33.595417494Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.599778 env[1315]: time="2025-05-08T00:30:33.599261402Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.600595 env[1315]: time="2025-05-08T00:30:33.600552048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:33.601259 env[1315]: time="2025-05-08T00:30:33.601218675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 8 00:30:33.616164 env[1315]: time="2025-05-08T00:30:33.616128544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:30:34.155456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839905513.mount: Deactivated successfully. 
May 8 00:30:36.754765 env[1315]: time="2025-05-08T00:30:36.754719123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:36.757332 env[1315]: time="2025-05-08T00:30:36.757285571Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:36.759109 env[1315]: time="2025-05-08T00:30:36.759076956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:36.760865 env[1315]: time="2025-05-08T00:30:36.760833675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:36.762515 env[1315]: time="2025-05-08T00:30:36.762486957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 8 00:30:38.189961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:30:38.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:38.190134 systemd[1]: Stopped kubelet.service. May 8 00:30:38.191686 systemd[1]: Starting kubelet.service... May 8 00:30:38.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:30:38.194188 kernel: audit: type=1130 audit(1746664238.188:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:38.194268 kernel: audit: type=1131 audit(1746664238.188:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:38.280345 kernel: audit: type=1130 audit(1746664238.274:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:38.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:38.275630 systemd[1]: Started kubelet.service. May 8 00:30:38.316526 kubelet[1768]: E0508 00:30:38.316490 1768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:30:38.318495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:30:38.318658 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:30:38.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 8 00:30:38.322288 kernel: audit: type=1131 audit(1746664238.318:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 8 00:30:41.905742 systemd[1]: Stopped kubelet.service. May 8 00:30:41.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:41.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:41.908006 systemd[1]: Starting kubelet.service... May 8 00:30:41.909739 kernel: audit: type=1130 audit(1746664241.904:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:41.909804 kernel: audit: type=1131 audit(1746664241.904:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:41.926413 systemd[1]: Reloading. 
May 8 00:30:41.969143 /usr/lib/systemd/system-generators/torcx-generator[1807]: time="2025-05-08T00:30:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:30:41.969173 /usr/lib/systemd/system-generators/torcx-generator[1807]: time="2025-05-08T00:30:41Z" level=info msg="torcx already run" May 8 00:30:42.052520 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:30:42.052540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:30:42.069728 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:30:42.130085 systemd[1]: Started kubelet.service. May 8 00:30:42.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:42.133316 kernel: audit: type=1130 audit(1746664242.129:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:42.134031 systemd[1]: Stopping kubelet.service... May 8 00:30:42.135156 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:30:42.135441 systemd[1]: Stopped kubelet.service. 
May 8 00:30:42.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:42.137081 systemd[1]: Starting kubelet.service... May 8 00:30:42.137339 kernel: audit: type=1131 audit(1746664242.134:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:42.218087 systemd[1]: Started kubelet.service. May 8 00:30:42.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:42.221292 kernel: audit: type=1130 audit(1746664242.217:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:42.257558 kubelet[1869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:30:42.257558 kubelet[1869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:30:42.257558 kubelet[1869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:30:42.258444 kubelet[1869]: I0508 00:30:42.258409 1869 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:30:43.424654 kubelet[1869]: I0508 00:30:43.424611 1869 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:30:43.424654 kubelet[1869]: I0508 00:30:43.424641 1869 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:30:43.425146 kubelet[1869]: I0508 00:30:43.425110 1869 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:30:43.446415 kubelet[1869]: E0508 00:30:43.446379 1869 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.446528 kubelet[1869]: I0508 00:30:43.446439 1869 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:30:43.457683 kubelet[1869]: I0508 00:30:43.457656 1869 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:30:43.459326 kubelet[1869]: I0508 00:30:43.459284 1869 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:30:43.459500 kubelet[1869]: I0508 00:30:43.459330 1869 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:30:43.459637 kubelet[1869]: I0508 00:30:43.459625 1869 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:30:43.459637 kubelet[1869]: I0508 00:30:43.459637 1869 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:30:43.460035 kubelet[1869]: I0508 00:30:43.460007 1869 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:30:43.461015 kubelet[1869]: I0508 00:30:43.460997 1869 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:30:43.461189 kubelet[1869]: I0508 00:30:43.461079 1869 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:30:43.461297 kubelet[1869]: I0508 00:30:43.461283 1869 kubelet.go:312] "Adding apiserver pod source"
May 8 00:30:43.461383 kubelet[1869]: I0508 00:30:43.461365 1869 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:30:43.462594 kubelet[1869]: I0508 00:30:43.462569 1869 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 8 00:30:43.463210 kubelet[1869]: I0508 00:30:43.463191 1869 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:30:43.463299 kubelet[1869]: W0508 00:30:43.463208 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.463299 kubelet[1869]: E0508 00:30:43.463263 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.463442 kubelet[1869]: W0508 00:30:43.463400 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.463510 kubelet[1869]: W0508 00:30:43.463491 1869 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:30:43.463569 kubelet[1869]: E0508 00:30:43.463553 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.464621 kubelet[1869]: I0508 00:30:43.464599 1869 server.go:1264] "Started kubelet"
May 8 00:30:43.465062 kubelet[1869]: I0508 00:30:43.465022 1869 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:30:43.466198 kubelet[1869]: I0508 00:30:43.466175 1869 server.go:455] "Adding debug handlers to kubelet server"
May 8 00:30:43.467689 kubelet[1869]: I0508 00:30:43.467630 1869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:30:43.467907 kubelet[1869]: I0508 00:30:43.467883 1869 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:30:43.468064 kubelet[1869]: E0508 00:30:43.467845 1869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d65e00d952cbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:30:43.464580285 +0000 UTC m=+1.243321692,LastTimestamp:2025-05-08 00:30:43.464580285 +0000 UTC m=+1.243321692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 8 00:30:43.467000 audit[1869]: AVC avc: denied { mac_admin } for pid=1869 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 8 00:30:43.468221 kubelet[1869]: I0508 00:30:43.468167 1869 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
May 8 00:30:43.468221 kubelet[1869]: I0508 00:30:43.468208 1869 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
May 8 00:30:43.468297 kubelet[1869]: I0508 00:30:43.468282 1869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:30:43.467000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 8 00:30:43.471534 kernel: audit: type=1400 audit(1746664243.467:206): avc: denied { mac_admin } for pid=1869 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 8 00:30:43.471620 kernel: audit: type=1401 audit(1746664243.467:206): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 8 00:30:43.471644 kernel: audit: type=1300 audit(1746664243.467:206): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b6d320 a1=4000b568d0 a2=4000b6d2f0 a3=25 items=0 ppid=1 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.467000 audit[1869]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b6d320 a1=4000b568d0 a2=4000b6d2f0 a3=25 items=0 ppid=1 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.471749 kubelet[1869]: E0508 00:30:43.471039 1869 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:30:43.471749 kubelet[1869]: I0508 00:30:43.471130 1869 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:30:43.471749 kubelet[1869]: I0508 00:30:43.471221 1869 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:30:43.473383 kubelet[1869]: I0508 00:30:43.473365 1869 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:30:43.473598 kubelet[1869]: E0508 00:30:43.473574 1869 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:30:43.473701 kubelet[1869]: W0508 00:30:43.473665 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.473753 kubelet[1869]: E0508 00:30:43.473715 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 8 00:30:43.474099 kubelet[1869]: I0508 00:30:43.474071 1869 factory.go:221] Registration of the systemd container factory successfully
May 8 00:30:43.474235 kubelet[1869]: E0508 00:30:43.474130 1869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms"
May 8 00:30:43.474235 kubelet[1869]: I0508 00:30:43.474154 1869 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:30:43.476227 kernel: audit: type=1327 audit(1746664243.467:206): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 8 00:30:43.476319 kernel: audit: type=1400 audit(1746664243.467:207): avc: denied { mac_admin } for pid=1869 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 8 00:30:43.467000 audit[1869]: AVC avc: denied { mac_admin } for pid=1869 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 8 00:30:43.478110 kernel: audit: type=1401 audit(1746664243.467:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 8 00:30:43.467000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 8 00:30:43.479049 kernel: audit: type=1300 audit(1746664243.467:207): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d11d40 a1=4000b568e8 a2=4000b6d3b0 a3=25 items=0 ppid=1 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.467000 audit[1869]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d11d40 a1=4000b568e8 a2=4000b6d3b0 a3=25 items=0 ppid=1 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.479454 kubelet[1869]: I0508 00:30:43.479425 1869 factory.go:221] Registration of the containerd container factory successfully
May 8 00:30:43.481977 kernel: audit: type=1327 audit(1746664243.467:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 8 00:30:43.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 8 00:30:43.484635 kernel: audit: type=1325 audit(1746664243.472:208): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.472000 audit[1881]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.486005 kernel: audit: type=1300 audit(1746664243.472:208): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd3109260 a2=0 a3=1 items=0 ppid=1869 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.472000 audit[1881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd3109260 a2=0 a3=1 items=0 ppid=1869 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
May 8 00:30:43.478000 audit[1882]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.478000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec1ed040 a2=0 a3=1 items=0 ppid=1869 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
May 8 00:30:43.487000 audit[1886]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.487000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffff28a800 a2=0 a3=1 items=0 ppid=1869 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
May 8 00:30:43.489000 audit[1888]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.489000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd0548b40 a2=0 a3=1 items=0 ppid=1869 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
May 8 00:30:43.497000 audit[1894]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.497000 audit[1894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffeb2047b0 a2=0 a3=1 items=0 ppid=1869 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
May 8 00:30:43.497678 kubelet[1869]: I0508 00:30:43.497644 1869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:30:43.498000 audit[1897]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.498000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe8598a0 a2=0 a3=1 items=0 ppid=1869 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
May 8 00:30:43.498000 audit[1898]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.498000 audit[1898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe507e040 a2=0 a3=1 items=0 ppid=1869 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
May 8 00:30:43.499674 kubelet[1869]: I0508 00:30:43.499662 1869 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:30:43.499674 kubelet[1869]: I0508 00:30:43.499674 1869 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:30:43.499758 kubelet[1869]: I0508 00:30:43.499691 1869 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:30:43.499000 audit[1899]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:30:43.499000 audit[1899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1d70730 a2=0 a3=1 items=0 ppid=1869 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.499000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
May 8 00:30:43.500000 audit[1896]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
May 8 00:30:43.500000 audit[1896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffde0af770 a2=0 a3=1 items=0 ppid=1869 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
May 8 00:30:43.500825 kubelet[1869]: I0508 00:30:43.500803 1869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:30:43.501030 kubelet[1869]: I0508 00:30:43.501018 1869 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:30:43.501467 kubelet[1869]: I0508 00:30:43.501445 1869 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 00:30:43.501000 audit[1900]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
May 8 00:30:43.501000 audit[1900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe21913a0 a2=0 a3=1 items=0 ppid=1869 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
May 8 00:30:43.502109 kubelet[1869]: W0508 00:30:43.501963 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.502174 kubelet[1869]: E0508 00:30:43.502142 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:43.502467 kubelet[1869]: E0508 00:30:43.502419 1869 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:30:43.502000 audit[1903]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
May 8 00:30:43.502000 audit[1903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffedceba40 a2=0 a3=1 items=0 ppid=1869 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
May 8 00:30:43.503000 audit[1904]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
May 8 00:30:43.503000 audit[1904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeb4b92e0 a2=0 a3=1 items=0 ppid=1869 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.503000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
May 8 00:30:43.573227 kubelet[1869]: I0508 00:30:43.573187 1869 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:30:43.573745 kubelet[1869]: E0508 00:30:43.573701 1869 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
May 8 00:30:43.602899 kubelet[1869]: E0508 00:30:43.602866 1869 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:30:43.674525 kubelet[1869]: E0508 00:30:43.674483 1869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms"
May 8 00:30:43.753112 kubelet[1869]: I0508 00:30:43.753013 1869 policy_none.go:49] "None policy: Start"
May 8 00:30:43.754420 kubelet[1869]: I0508 00:30:43.754395 1869 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:30:43.754483 kubelet[1869]: I0508 00:30:43.754454 1869 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:30:43.761714 kubelet[1869]: I0508 00:30:43.760864 1869 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:30:43.760000 audit[1869]: AVC avc: denied { mac_admin } for pid=1869 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 8 00:30:43.760000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 8 00:30:43.760000 audit[1869]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40010531a0 a1=4001050738 a2=4001053170 a3=25 items=0 ppid=1 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:30:43.760000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 8 00:30:43.761966 kubelet[1869]: I0508 00:30:43.761750 1869 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
May 8 00:30:43.761966 kubelet[1869]: I0508 00:30:43.761884 1869 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:30:43.762020 kubelet[1869]: I0508 00:30:43.761980 1869 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:30:43.763104 kubelet[1869]: E0508 00:30:43.763084 1869 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 8 00:30:43.775385 kubelet[1869]: I0508 00:30:43.775364 1869 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:30:43.775875 kubelet[1869]: E0508 00:30:43.775829 1869 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
May 8 00:30:43.803016 kubelet[1869]: I0508 00:30:43.802966 1869 topology_manager.go:215] "Topology Admit Handler" podUID="1ecbf0a38dce867f5e6f9f9c6b1c2012" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 8 00:30:43.804364 kubelet[1869]: I0508 00:30:43.804332 1869 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 8 00:30:43.805033 kubelet[1869]: I0508 00:30:43.805011 1869 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 8 00:30:43.874767 kubelet[1869]: I0508 00:30:43.874709 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:30:43.874930 kubelet[1869]: I0508 00:30:43.874756 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:30:43.874930 kubelet[1869]: I0508 00:30:43.874822 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:30:43.874930 kubelet[1869]: I0508 00:30:43.874879 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:30:43.875055 kubelet[1869]: I0508 00:30:43.874909 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:30:43.875055 kubelet[1869]: I0508 00:30:43.874957 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:30:43.875055 kubelet[1869]: I0508 00:30:43.874973 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:30:43.875055 kubelet[1869]: I0508 00:30:43.874992 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:30:43.875055 kubelet[1869]: I0508 00:30:43.875038 1869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:30:44.075438 kubelet[1869]: E0508 00:30:44.075330 1869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms"
May 8 00:30:44.108617 kubelet[1869]: E0508 00:30:44.108570 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:30:44.109297 env[1315]: time="2025-05-08T00:30:44.109202918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
May 8 00:30:44.110258 kubelet[1869]: E0508 00:30:44.110237 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:30:44.110594 env[1315]: time="2025-05-08T00:30:44.110564190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ecbf0a38dce867f5e6f9f9c6b1c2012,Namespace:kube-system,Attempt:0,}"
May 8 00:30:44.110950 kubelet[1869]: E0508 00:30:44.110927 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:30:44.111404 env[1315]: time="2025-05-08T00:30:44.111372559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}"
May 8 00:30:44.177751 kubelet[1869]: I0508 00:30:44.177710 1869 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:30:44.178047 kubelet[1869]: E0508 00:30:44.178018 1869 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
May 8 00:30:44.442579 kubelet[1869]: W0508 00:30:44.442444 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:44.442579 kubelet[1869]: E0508 00:30:44.442507 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 8 00:30:44.612946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289536596.mount: Deactivated successfully.
May 8 00:30:44.617619 env[1315]: time="2025-05-08T00:30:44.617573857Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:30:44.620000 env[1315]: time="2025-05-08T00:30:44.619948430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:30:44.621130 env[1315]: time="2025-05-08T00:30:44.621103848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:30:44.621792 env[1315]: time="2025-05-08T00:30:44.621771941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:30:44.623142 env[1315]: time="2025-05-08T00:30:44.623108126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:30:44.625582 env[1315]: time="2025-05-08T00:30:44.625554637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:30:44.633565 env[1315]: time="2025-05-08T00:30:44.633517293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:30:44.636994 env[1315]:
time="2025-05-08T00:30:44.636957782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:44.638059 env[1315]: time="2025-05-08T00:30:44.638032699Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:44.638791 env[1315]: time="2025-05-08T00:30:44.638767409Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:44.639799 env[1315]: time="2025-05-08T00:30:44.639774469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:44.640528 env[1315]: time="2025-05-08T00:30:44.640503177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:30:44.660755 env[1315]: time="2025-05-08T00:30:44.660668824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:30:44.661449 env[1315]: time="2025-05-08T00:30:44.660730600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:30:44.661553 env[1315]: time="2025-05-08T00:30:44.661427660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:30:44.662698 env[1315]: time="2025-05-08T00:30:44.662640093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:30:44.662698 env[1315]: time="2025-05-08T00:30:44.662672821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:30:44.662698 env[1315]: time="2025-05-08T00:30:44.662682824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:30:44.662844 env[1315]: time="2025-05-08T00:30:44.662809376Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be3d4c0f05cd4d96ff280c099f7e918431c5285006eb78e0a78657df6bc3773b pid=1926 runtime=io.containerd.runc.v2 May 8 00:30:44.663038 env[1315]: time="2025-05-08T00:30:44.662980141Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/755092d63da5ce50aa838ed6788ae27d5568b614353df6d85899032f5ad35ed6 pid=1920 runtime=io.containerd.runc.v2 May 8 00:30:44.663571 env[1315]: time="2025-05-08T00:30:44.663518520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:30:44.663650 env[1315]: time="2025-05-08T00:30:44.663570493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:30:44.663988 env[1315]: time="2025-05-08T00:30:44.663949991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:30:44.664503 env[1315]: time="2025-05-08T00:30:44.664438437Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82c39e91c2db4c5c8fd70c66f302cc9255cdd9e57a3e9482b646ba5af645b87a pid=1937 runtime=io.containerd.runc.v2 May 8 00:30:44.733082 env[1315]: time="2025-05-08T00:30:44.732963930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ecbf0a38dce867f5e6f9f9c6b1c2012,Namespace:kube-system,Attempt:0,} returns sandbox id \"755092d63da5ce50aa838ed6788ae27d5568b614353df6d85899032f5ad35ed6\"" May 8 00:30:44.735183 kubelet[1869]: E0508 00:30:44.735144 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:44.735370 env[1315]: time="2025-05-08T00:30:44.735338023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"be3d4c0f05cd4d96ff280c099f7e918431c5285006eb78e0a78657df6bc3773b\"" May 8 00:30:44.735853 kubelet[1869]: E0508 00:30:44.735822 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:44.738570 env[1315]: time="2025-05-08T00:30:44.738529487Z" level=info msg="CreateContainer within sandbox \"be3d4c0f05cd4d96ff280c099f7e918431c5285006eb78e0a78657df6bc3773b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:30:44.738644 env[1315]: time="2025-05-08T00:30:44.738538729Z" level=info msg="CreateContainer within sandbox \"755092d63da5ce50aa838ed6788ae27d5568b614353df6d85899032f5ad35ed6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:30:44.752595 
env[1315]: time="2025-05-08T00:30:44.752552548Z" level=info msg="CreateContainer within sandbox \"755092d63da5ce50aa838ed6788ae27d5568b614353df6d85899032f5ad35ed6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5aee44ba7a47123e23fae644dd8ba5648adbe15b12a9630a95e547e21d44dada\"" May 8 00:30:44.753165 env[1315]: time="2025-05-08T00:30:44.753135818Z" level=info msg="StartContainer for \"5aee44ba7a47123e23fae644dd8ba5648adbe15b12a9630a95e547e21d44dada\"" May 8 00:30:44.757755 env[1315]: time="2025-05-08T00:30:44.757721442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"82c39e91c2db4c5c8fd70c66f302cc9255cdd9e57a3e9482b646ba5af645b87a\"" May 8 00:30:44.758351 kubelet[1869]: E0508 00:30:44.758333 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:44.760063 env[1315]: time="2025-05-08T00:30:44.760022356Z" level=info msg="CreateContainer within sandbox \"82c39e91c2db4c5c8fd70c66f302cc9255cdd9e57a3e9482b646ba5af645b87a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:30:44.761631 env[1315]: time="2025-05-08T00:30:44.761600364Z" level=info msg="CreateContainer within sandbox \"be3d4c0f05cd4d96ff280c099f7e918431c5285006eb78e0a78657df6bc3773b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d3674561e2f850a9114854984794ca106c6d0396a80a09f1ad581f0fca6fca1\"" May 8 00:30:44.762010 env[1315]: time="2025-05-08T00:30:44.761989784Z" level=info msg="StartContainer for \"9d3674561e2f850a9114854984794ca106c6d0396a80a09f1ad581f0fca6fca1\"" May 8 00:30:44.774224 env[1315]: time="2025-05-08T00:30:44.774173370Z" level=info msg="CreateContainer within sandbox \"82c39e91c2db4c5c8fd70c66f302cc9255cdd9e57a3e9482b646ba5af645b87a\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"695f0a500ac097d7933b31cf77f838129e2b18b89e62268dad7d975b0e4cdfd5\"" May 8 00:30:44.774751 env[1315]: time="2025-05-08T00:30:44.774717270Z" level=info msg="StartContainer for \"695f0a500ac097d7933b31cf77f838129e2b18b89e62268dad7d975b0e4cdfd5\"" May 8 00:30:44.811944 kubelet[1869]: W0508 00:30:44.811885 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 00:30:44.811944 kubelet[1869]: E0508 00:30:44.811950 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 00:30:44.865291 kubelet[1869]: W0508 00:30:44.854645 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 00:30:44.865291 kubelet[1869]: E0508 00:30:44.854702 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 00:30:44.880066 kubelet[1869]: E0508 00:30:44.876133 1869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" May 8 00:30:44.904457 env[1315]: time="2025-05-08T00:30:44.904259437Z" 
level=info msg="StartContainer for \"5aee44ba7a47123e23fae644dd8ba5648adbe15b12a9630a95e547e21d44dada\" returns successfully" May 8 00:30:44.911468 env[1315]: time="2025-05-08T00:30:44.904877477Z" level=info msg="StartContainer for \"695f0a500ac097d7933b31cf77f838129e2b18b89e62268dad7d975b0e4cdfd5\" returns successfully" May 8 00:30:44.912033 env[1315]: time="2025-05-08T00:30:44.911987873Z" level=info msg="StartContainer for \"9d3674561e2f850a9114854984794ca106c6d0396a80a09f1ad581f0fca6fca1\" returns successfully" May 8 00:30:44.923268 kubelet[1869]: W0508 00:30:44.922882 1869 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 00:30:44.923268 kubelet[1869]: E0508 00:30:44.922952 1869 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 00:30:44.981244 kubelet[1869]: I0508 00:30:44.981207 1869 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:30:44.981674 kubelet[1869]: E0508 00:30:44.981640 1869 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 8 00:30:45.507320 kubelet[1869]: E0508 00:30:45.507292 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:45.509513 kubelet[1869]: E0508 00:30:45.509491 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:45.511481 kubelet[1869]: E0508 00:30:45.511461 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:46.513111 kubelet[1869]: E0508 00:30:46.513068 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:46.583521 kubelet[1869]: I0508 00:30:46.583480 1869 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:30:46.744087 kubelet[1869]: E0508 00:30:46.744046 1869 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:30:46.919097 kubelet[1869]: I0508 00:30:46.918995 1869 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:30:47.077884 kubelet[1869]: E0508 00:30:47.077849 1869 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 00:30:47.078155 kubelet[1869]: E0508 00:30:47.078135 1869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:47.463922 kubelet[1869]: I0508 00:30:47.463885 1869 apiserver.go:52] "Watching apiserver" May 8 00:30:47.472383 kubelet[1869]: I0508 00:30:47.472329 1869 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:30:48.675147 systemd[1]: Reloading. 
May 8 00:30:48.716942 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2025-05-08T00:30:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:30:48.716970 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2025-05-08T00:30:48Z" level=info msg="torcx already run" May 8 00:30:48.782528 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:30:48.782549 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:30:48.798706 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:30:48.868187 kubelet[1869]: I0508 00:30:48.868087 1869 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:30:48.868258 systemd[1]: Stopping kubelet.service... May 8 00:30:48.890590 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:30:48.890901 systemd[1]: Stopped kubelet.service. May 8 00:30:48.893152 kernel: kauditd_printk_skb: 38 callbacks suppressed May 8 00:30:48.893224 kernel: audit: type=1131 audit(1746664248.889:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:30:48.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:48.892897 systemd[1]: Starting kubelet.service... May 8 00:30:48.979059 systemd[1]: Started kubelet.service. May 8 00:30:48.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:48.982309 kernel: audit: type=1130 audit(1746664248.978:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:49.015130 kubelet[2224]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:30:49.015130 kubelet[2224]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:30:49.015130 kubelet[2224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:30:49.015505 kubelet[2224]: I0508 00:30:49.015181 2224 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:30:49.019337 kubelet[2224]: I0508 00:30:49.019309 2224 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:30:49.019337 kubelet[2224]: I0508 00:30:49.019334 2224 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:30:49.019899 kubelet[2224]: I0508 00:30:49.019870 2224 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:30:49.021742 kubelet[2224]: I0508 00:30:49.021719 2224 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:30:49.022954 kubelet[2224]: I0508 00:30:49.022917 2224 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:30:49.028126 kubelet[2224]: I0508 00:30:49.028102 2224 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:30:49.028636 kubelet[2224]: I0508 00:30:49.028605 2224 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:30:49.028882 kubelet[2224]: I0508 00:30:49.028717 2224 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:30:49.029007 kubelet[2224]: I0508 00:30:49.028993 2224 topology_manager.go:138] "Creating topology manager with none policy" May 8 
00:30:49.029073 kubelet[2224]: I0508 00:30:49.029062 2224 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:30:49.029158 kubelet[2224]: I0508 00:30:49.029148 2224 state_mem.go:36] "Initialized new in-memory state store" May 8 00:30:49.029331 kubelet[2224]: I0508 00:30:49.029318 2224 kubelet.go:400] "Attempting to sync node with API server" May 8 00:30:49.029401 kubelet[2224]: I0508 00:30:49.029391 2224 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:30:49.029474 kubelet[2224]: I0508 00:30:49.029464 2224 kubelet.go:312] "Adding apiserver pod source" May 8 00:30:49.029535 kubelet[2224]: I0508 00:30:49.029526 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:30:49.041796 kubelet[2224]: I0508 00:30:49.041769 2224 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:30:49.042106 kubelet[2224]: I0508 00:30:49.042067 2224 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:30:49.043355 kubelet[2224]: I0508 00:30:49.043321 2224 server.go:1264] "Started kubelet" May 8 00:30:49.043848 kubelet[2224]: I0508 00:30:49.043809 2224 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:30:49.045056 kubelet[2224]: I0508 00:30:49.044985 2224 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:30:49.045264 kubelet[2224]: I0508 00:30:49.045247 2224 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:30:49.049547 kubelet[2224]: I0508 00:30:49.049525 2224 server.go:455] "Adding debug handlers to kubelet server" May 8 00:30:49.050098 kubelet[2224]: E0508 00:30:49.050065 2224 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:30:49.048000 audit[2224]: AVC avc: denied { mac_admin } for pid=2224 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:30:49.052305 kubelet[2224]: I0508 00:30:49.052253 2224 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 8 00:30:49.052371 kubelet[2224]: I0508 00:30:49.052324 2224 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 8 00:30:49.052371 kubelet[2224]: I0508 00:30:49.052353 2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:30:49.048000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:30:49.053015 kubelet[2224]: I0508 00:30:49.052637 2224 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:30:49.053015 kubelet[2224]: I0508 00:30:49.052728 2224 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:30:49.053015 kubelet[2224]: I0508 00:30:49.052850 2224 reconciler.go:26] "Reconciler: start to sync state" May 8 00:30:49.053469 kernel: audit: type=1400 audit(1746664249.048:223): avc: denied { mac_admin } for pid=2224 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:30:49.053523 kernel: audit: type=1401 audit(1746664249.048:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:30:49.053542 kernel: audit: type=1300 audit(1746664249.048:223): arch=c00000b7 syscall=5 success=no 
exit=-22 a0=40008a34d0 a1=4000175140 a2=40008a34a0 a3=25 items=0 ppid=1 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:49.048000 audit[2224]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008a34d0 a1=4000175140 a2=40008a34a0 a3=25 items=0 ppid=1 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:49.054643 kubelet[2224]: I0508 00:30:49.054624 2224 factory.go:221] Registration of the systemd container factory successfully May 8 00:30:49.054736 kubelet[2224]: I0508 00:30:49.054711 2224 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:30:49.055609 kubelet[2224]: I0508 00:30:49.055590 2224 factory.go:221] Registration of the containerd container factory successfully May 8 00:30:49.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:30:49.060239 kernel: audit: type=1327 audit(1746664249.048:223): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:30:49.051000 audit[2224]: AVC avc: denied { mac_admin } for pid=2224 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 8 00:30:49.063578 kernel: audit: type=1400 audit(1746664249.051:224): avc: denied { mac_admin } for pid=2224 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:30:49.051000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:30:49.065409 kernel: audit: type=1401 audit(1746664249.051:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:30:49.065463 kernel: audit: type=1300 audit(1746664249.051:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e8c080 a1=4000174000 a2=40008a2060 a3=25 items=0 ppid=1 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:49.051000 audit[2224]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e8c080 a1=4000174000 a2=40008a2060 a3=25 items=0 ppid=1 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:49.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:30:49.070510 kubelet[2224]: I0508 00:30:49.070471 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:30:49.071277 kubelet[2224]: I0508 00:30:49.071247 2224 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:30:49.071330 kubelet[2224]: I0508 00:30:49.071289 2224 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:30:49.071330 kubelet[2224]: I0508 00:30:49.071304 2224 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:30:49.071378 kubelet[2224]: E0508 00:30:49.071344 2224 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:30:49.072869 kernel: audit: type=1327 audit(1746664249.051:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:30:49.111738 kubelet[2224]: I0508 00:30:49.111711 2224 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:30:49.111738 kubelet[2224]: I0508 00:30:49.111731 2224 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:30:49.111879 kubelet[2224]: I0508 00:30:49.111750 2224 state_mem.go:36] "Initialized new in-memory state store" May 8 00:30:49.111924 kubelet[2224]: I0508 00:30:49.111904 2224 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:30:49.111978 kubelet[2224]: I0508 00:30:49.111921 2224 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:30:49.111978 kubelet[2224]: I0508 00:30:49.111940 2224 policy_none.go:49] "None policy: Start" May 8 00:30:49.112647 kubelet[2224]: I0508 00:30:49.112630 2224 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:30:49.112736 kubelet[2224]: I0508 00:30:49.112726 2224 state_mem.go:35] "Initializing new in-memory state store" May 8 00:30:49.112927 kubelet[2224]: I0508 00:30:49.112913 2224 state_mem.go:75] "Updated machine memory state" May 8 00:30:49.114103 kubelet[2224]: I0508 00:30:49.114076 2224 manager.go:479] "Failed to read 
data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:30:49.112000 audit[2224]: AVC avc: denied { mac_admin } for pid=2224 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:30:49.112000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:30:49.112000 audit[2224]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c98c60 a1=4000b5fde8 a2=4000c98c00 a3=25 items=0 ppid=1 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:30:49.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:30:49.114434 kubelet[2224]: I0508 00:30:49.114212 2224 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 8 00:30:49.114660 kubelet[2224]: I0508 00:30:49.114620 2224 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:30:49.115224 kubelet[2224]: I0508 00:30:49.115192 2224 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:30:49.156305 kubelet[2224]: I0508 00:30:49.156253 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:30:49.162905 kubelet[2224]: I0508 00:30:49.162092 2224 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:30:49.162905 kubelet[2224]: I0508 00:30:49.162215 2224 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:30:49.171477 kubelet[2224]: I0508 00:30:49.171438 2224 topology_manager.go:215] "Topology Admit Handler" podUID="1ecbf0a38dce867f5e6f9f9c6b1c2012" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:30:49.171569 kubelet[2224]: I0508 00:30:49.171541 2224 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:30:49.171594 kubelet[2224]: I0508 00:30:49.171579 2224 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:30:49.354169 kubelet[2224]: I0508 00:30:49.354064 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 8 00:30:49.354169 kubelet[2224]: I0508 00:30:49.354105 2224 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:30:49.354169 kubelet[2224]: I0508 00:30:49.354124 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:30:49.354169 kubelet[2224]: I0508 00:30:49.354139 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:30:49.354169 kubelet[2224]: I0508 00:30:49.354154 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:30:49.354400 kubelet[2224]: I0508 00:30:49.354171 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 8 00:30:49.354400 kubelet[2224]: I0508 00:30:49.354186 2224 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 8 00:30:49.354400 kubelet[2224]: I0508 00:30:49.354202 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:30:49.354400 kubelet[2224]: I0508 00:30:49.354220 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:30:49.476889 kubelet[2224]: E0508 00:30:49.476856 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:49.478801 kubelet[2224]: E0508 00:30:49.478774 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:49.478895 kubelet[2224]: E0508 00:30:49.478805 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:50.030330 kubelet[2224]: I0508 00:30:50.030261 2224 apiserver.go:52] "Watching apiserver" May 8 00:30:50.053019 kubelet[2224]: I0508 00:30:50.052988 
2224 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:30:50.085021 kubelet[2224]: E0508 00:30:50.084975 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:50.085797 kubelet[2224]: E0508 00:30:50.085775 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:50.094297 kubelet[2224]: E0508 00:30:50.093608 2224 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:30:50.094297 kubelet[2224]: E0508 00:30:50.094062 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:50.112453 kubelet[2224]: I0508 00:30:50.112384 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.112366961 podStartE2EDuration="1.112366961s" podCreationTimestamp="2025-05-08 00:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:30:50.104706279 +0000 UTC m=+1.121385693" watchObservedRunningTime="2025-05-08 00:30:50.112366961 +0000 UTC m=+1.129046295" May 8 00:30:50.112618 kubelet[2224]: I0508 00:30:50.112496 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.112490816 podStartE2EDuration="1.112490816s" podCreationTimestamp="2025-05-08 00:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-08 00:30:50.112349039 +0000 UTC m=+1.129028373" watchObservedRunningTime="2025-05-08 00:30:50.112490816 +0000 UTC m=+1.129170150" May 8 00:30:50.135150 kubelet[2224]: I0508 00:30:50.135094 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.135077018 podStartE2EDuration="1.135077018s" podCreationTimestamp="2025-05-08 00:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:30:50.119849303 +0000 UTC m=+1.136528717" watchObservedRunningTime="2025-05-08 00:30:50.135077018 +0000 UTC m=+1.151756312" May 8 00:30:51.086463 kubelet[2224]: E0508 00:30:51.086436 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:52.132033 kubelet[2224]: E0508 00:30:52.131992 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:54.086347 sudo[1479]: pam_unix(sudo:session): session closed for user root May 8 00:30:54.085000 audit[1479]: USER_END pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:30:54.089579 kernel: kauditd_printk_skb: 4 callbacks suppressed May 8 00:30:54.089652 kernel: audit: type=1106 audit(1746664254.085:226): pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 8 00:30:54.086000 audit[1479]: CRED_DISP pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:30:54.092261 kernel: audit: type=1104 audit(1746664254.086:227): pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:30:54.092511 sshd[1473]: pam_unix(sshd:session): session closed for user core May 8 00:30:54.092000 audit[1473]: USER_END pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:30:54.093000 audit[1473]: CRED_DISP pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:30:54.096133 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:45708.service: Deactivated successfully. 
May 8 00:30:54.099161 kernel: audit: type=1106 audit(1746664254.092:228): pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:30:54.099234 kernel: audit: type=1104 audit(1746664254.093:229): pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:30:54.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.15:22-10.0.0.1:45708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:54.099581 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:30:54.101622 kernel: audit: type=1131 audit(1746664254.095:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.15:22-10.0.0.1:45708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:30:54.101798 systemd-logind[1297]: Session 7 logged out. Waiting for processes to exit. May 8 00:30:54.102745 systemd-logind[1297]: Removed session 7. 
May 8 00:30:55.489123 kubelet[2224]: E0508 00:30:55.489083 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:56.093423 kubelet[2224]: E0508 00:30:56.093392 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:30:59.914712 kubelet[2224]: E0508 00:30:59.914684 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:00.956053 update_engine[1299]: I0508 00:31:00.955997 1299 update_attempter.cc:509] Updating boot flags... May 8 00:31:02.139716 kubelet[2224]: E0508 00:31:02.139686 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:03.574845 kubelet[2224]: I0508 00:31:03.574800 2224 topology_manager.go:215] "Topology Admit Handler" podUID="2f5edad4-4ec1-4cf5-9be9-8634c121cb6b" podNamespace="kube-system" podName="kube-proxy-bg9t7" May 8 00:31:03.582675 kubelet[2224]: I0508 00:31:03.582649 2224 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:31:03.583359 env[1315]: time="2025-05-08T00:31:03.583157987Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:31:03.584043 kubelet[2224]: I0508 00:31:03.583385 2224 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:31:03.661267 kubelet[2224]: I0508 00:31:03.661215 2224 topology_manager.go:215] "Topology Admit Handler" podUID="988f1b3b-e3ac-4354-9735-28bdbc4eb0af" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-zlg88" May 8 00:31:03.661453 kubelet[2224]: I0508 00:31:03.661429 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f5edad4-4ec1-4cf5-9be9-8634c121cb6b-lib-modules\") pod \"kube-proxy-bg9t7\" (UID: \"2f5edad4-4ec1-4cf5-9be9-8634c121cb6b\") " pod="kube-system/kube-proxy-bg9t7" May 8 00:31:03.661540 kubelet[2224]: I0508 00:31:03.661456 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l66s\" (UniqueName: \"kubernetes.io/projected/2f5edad4-4ec1-4cf5-9be9-8634c121cb6b-kube-api-access-6l66s\") pod \"kube-proxy-bg9t7\" (UID: \"2f5edad4-4ec1-4cf5-9be9-8634c121cb6b\") " pod="kube-system/kube-proxy-bg9t7" May 8 00:31:03.661540 kubelet[2224]: I0508 00:31:03.661478 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f5edad4-4ec1-4cf5-9be9-8634c121cb6b-kube-proxy\") pod \"kube-proxy-bg9t7\" (UID: \"2f5edad4-4ec1-4cf5-9be9-8634c121cb6b\") " pod="kube-system/kube-proxy-bg9t7" May 8 00:31:03.661540 kubelet[2224]: I0508 00:31:03.661495 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f5edad4-4ec1-4cf5-9be9-8634c121cb6b-xtables-lock\") pod \"kube-proxy-bg9t7\" (UID: \"2f5edad4-4ec1-4cf5-9be9-8634c121cb6b\") " pod="kube-system/kube-proxy-bg9t7" May 8 00:31:03.762486 kubelet[2224]: I0508 00:31:03.762418 2224 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/988f1b3b-e3ac-4354-9735-28bdbc4eb0af-var-lib-calico\") pod \"tigera-operator-797db67f8-zlg88\" (UID: \"988f1b3b-e3ac-4354-9735-28bdbc4eb0af\") " pod="tigera-operator/tigera-operator-797db67f8-zlg88" May 8 00:31:03.762486 kubelet[2224]: I0508 00:31:03.762463 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnb7\" (UniqueName: \"kubernetes.io/projected/988f1b3b-e3ac-4354-9735-28bdbc4eb0af-kube-api-access-zqnb7\") pod \"tigera-operator-797db67f8-zlg88\" (UID: \"988f1b3b-e3ac-4354-9735-28bdbc4eb0af\") " pod="tigera-operator/tigera-operator-797db67f8-zlg88" May 8 00:31:03.878030 kubelet[2224]: E0508 00:31:03.877934 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:03.879302 env[1315]: time="2025-05-08T00:31:03.878887907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bg9t7,Uid:2f5edad4-4ec1-4cf5-9be9-8634c121cb6b,Namespace:kube-system,Attempt:0,}" May 8 00:31:03.894141 env[1315]: time="2025-05-08T00:31:03.893956053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:03.894141 env[1315]: time="2025-05-08T00:31:03.893996655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:03.894141 env[1315]: time="2025-05-08T00:31:03.894007056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:03.894324 env[1315]: time="2025-05-08T00:31:03.894220948Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e619968217624f8bc561a3bddbe1633e59e80bb71c8f44eef95bbdeb71d1b1f pid=2334 runtime=io.containerd.runc.v2 May 8 00:31:03.935419 env[1315]: time="2025-05-08T00:31:03.935380154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bg9t7,Uid:2f5edad4-4ec1-4cf5-9be9-8634c121cb6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e619968217624f8bc561a3bddbe1633e59e80bb71c8f44eef95bbdeb71d1b1f\"" May 8 00:31:03.937113 kubelet[2224]: E0508 00:31:03.935968 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:03.941009 env[1315]: time="2025-05-08T00:31:03.940175710Z" level=info msg="CreateContainer within sandbox \"7e619968217624f8bc561a3bddbe1633e59e80bb71c8f44eef95bbdeb71d1b1f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:31:03.952847 env[1315]: time="2025-05-08T00:31:03.952792795Z" level=info msg="CreateContainer within sandbox \"7e619968217624f8bc561a3bddbe1633e59e80bb71c8f44eef95bbdeb71d1b1f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2be34f354be231e23e0c997a6e63696d25fba49fce87f22c98262308c894f46d\"" May 8 00:31:03.954470 env[1315]: time="2025-05-08T00:31:03.953541198Z" level=info msg="StartContainer for \"2be34f354be231e23e0c997a6e63696d25fba49fce87f22c98262308c894f46d\"" May 8 00:31:03.964991 env[1315]: time="2025-05-08T00:31:03.964953174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zlg88,Uid:988f1b3b-e3ac-4354-9735-28bdbc4eb0af,Namespace:tigera-operator,Attempt:0,}" May 8 00:31:03.986324 env[1315]: time="2025-05-08T00:31:03.983475038Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:03.986324 env[1315]: time="2025-05-08T00:31:03.983515761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:03.986324 env[1315]: time="2025-05-08T00:31:03.983526601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:03.986324 env[1315]: time="2025-05-08T00:31:03.983737534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/096b4e3921356c6474dc2d3da31ca2c09f6c8a7913e1141eb42fe79d0e0abba8 pid=2396 runtime=io.containerd.runc.v2 May 8 00:31:04.018886 env[1315]: time="2025-05-08T00:31:04.018842783Z" level=info msg="StartContainer for \"2be34f354be231e23e0c997a6e63696d25fba49fce87f22c98262308c894f46d\" returns successfully" May 8 00:31:04.044833 env[1315]: time="2025-05-08T00:31:04.044791444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zlg88,Uid:988f1b3b-e3ac-4354-9735-28bdbc4eb0af,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"096b4e3921356c6474dc2d3da31ca2c09f6c8a7913e1141eb42fe79d0e0abba8\"" May 8 00:31:04.048200 env[1315]: time="2025-05-08T00:31:04.048132907Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:31:04.106341 kubelet[2224]: E0508 00:31:04.106309 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:04.115694 kubelet[2224]: I0508 00:31:04.115636 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bg9t7" podStartSLOduration=1.115619441 podStartE2EDuration="1.115619441s" podCreationTimestamp="2025-05-08 00:31:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:31:04.115399828 +0000 UTC m=+15.132079162" watchObservedRunningTime="2025-05-08 00:31:04.115619441 +0000 UTC m=+15.132298775" May 8 00:31:04.145000 audit[2469]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.145000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd231ab20 a2=0 a3=1 items=0 ppid=2386 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.151137 kernel: audit: type=1325 audit(1746664264.145:231): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.151223 kernel: audit: type=1300 audit(1746664264.145:231): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd231ab20 a2=0 a3=1 items=0 ppid=2386 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:31:04.152854 kernel: audit: type=1327 audit(1746664264.145:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:31:04.152902 kernel: audit: type=1325 audit(1746664264.145:232): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.145000 audit[2470]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2470 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.145000 audit[2470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc77a1310 a2=0 a3=1 items=0 ppid=2386 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.156898 kernel: audit: type=1300 audit(1746664264.145:232): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc77a1310 a2=0 a3=1 items=0 ppid=2386 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.156968 kernel: audit: type=1327 audit(1746664264.145:232): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:31:04.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:31:04.146000 audit[2471]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.159565 kernel: audit: type=1325 audit(1746664264.146:233): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.146000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7c67ee0 a2=0 a3=1 items=0 ppid=2386 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.162212 kernel: audit: type=1300 audit(1746664264.146:233): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7c67ee0 a2=0 a3=1 items=0 ppid=2386 pid=2471 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.162300 kernel: audit: type=1327 audit(1746664264.146:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 8 00:31:04.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 8 00:31:04.163467 kernel: audit: type=1325 audit(1746664264.147:234): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.147000 audit[2472]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.147000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7d6ba70 a2=0 a3=1 items=0 ppid=2386 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 8 00:31:04.148000 audit[2473]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.148000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecc9cfb0 a2=0 a3=1 items=0 ppid=2386 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.148000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 8 00:31:04.150000 audit[2474]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.150000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe97e05e0 a2=0 a3=1 items=0 ppid=2386 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 8 00:31:04.247000 audit[2475]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.247000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc8970940 a2=0 a3=1 items=0 ppid=2386 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 8 00:31:04.252000 audit[2477]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.252000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffc7add50 a2=0 a3=1 items=0 ppid=2386 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.252000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 8 00:31:04.258000 audit[2480]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.258000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcfc56830 a2=0 a3=1 items=0 ppid=2386 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.258000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 8 00:31:04.259000 audit[2481]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.259000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7326c10 a2=0 a3=1 items=0 ppid=2386 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 8 00:31:04.261000 audit[2483]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.261000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=fffff3e1f430 a2=0 a3=1 items=0 ppid=2386 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 8 00:31:04.262000 audit[2484]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.262000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc3d7f20 a2=0 a3=1 items=0 ppid=2386 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 8 00:31:04.264000 audit[2486]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.264000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd0a2b9f0 a2=0 a3=1 items=0 ppid=2386 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.264000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 8 00:31:04.267000 audit[2489]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.267000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd1181a90 a2=0 a3=1 items=0 ppid=2386 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 8 00:31:04.268000 audit[2490]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.268000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0039c30 a2=0 a3=1 items=0 ppid=2386 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 8 00:31:04.271000 audit[2492]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.271000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 
a1=ffffdf48d540 a2=0 a3=1 items=0 ppid=2386 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 8 00:31:04.273000 audit[2493]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.273000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9222010 a2=0 a3=1 items=0 ppid=2386 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 8 00:31:04.275000 audit[2495]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.275000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff22b2380 a2=0 a3=1 items=0 ppid=2386 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 
8 00:31:04.278000 audit[2498]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.278000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefc3cda0 a2=0 a3=1 items=0 ppid=2386 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 8 00:31:04.281000 audit[2501]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.281000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc76c95b0 a2=0 a3=1 items=0 ppid=2386 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 8 00:31:04.282000 audit[2502]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.282000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff13fba40 a2=0 a3=1 items=0 ppid=2386 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.282000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 8 00:31:04.284000 audit[2504]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.284000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe799f600 a2=0 a3=1 items=0 ppid=2386 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:31:04.288000 audit[2507]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.288000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc865cb50 a2=0 a3=1 items=0 ppid=2386 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:31:04.289000 audit[2508]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.289000 
audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffceb73c00 a2=0 a3=1 items=0 ppid=2386 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 8 00:31:04.291000 audit[2510]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:31:04.291000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffebe205e0 a2=0 a3=1 items=0 ppid=2386 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 8 00:31:04.309000 audit[2516]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:04.309000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff30a7f70 a2=0 a3=1 items=0 ppid=2386 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:04.326000 audit[2516]: NETFILTER_CFG table=nat:64 
family=2 entries=14 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:04.326000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff30a7f70 a2=0 a3=1 items=0 ppid=2386 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:04.327000 audit[2521]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.327000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdf664e00 a2=0 a3=1 items=0 ppid=2386 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 8 00:31:04.329000 audit[2523]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.329000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdc5e26b0 a2=0 a3=1 items=0 ppid=2386 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.329000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 8 00:31:04.333000 audit[2526]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.333000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcc1422e0 a2=0 a3=1 items=0 ppid=2386 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 8 00:31:04.334000 audit[2527]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.334000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc88ecd70 a2=0 a3=1 items=0 ppid=2386 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.334000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 8 00:31:04.336000 audit[2529]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.336000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=fffff8ad0fa0 a2=0 a3=1 items=0 ppid=2386 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 8 00:31:04.337000 audit[2530]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.337000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6624010 a2=0 a3=1 items=0 ppid=2386 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 8 00:31:04.339000 audit[2532]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.339000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd53328c0 a2=0 a3=1 items=0 ppid=2386 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.339000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 8 00:31:04.343000 audit[2535]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.343000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe9754aa0 a2=0 a3=1 items=0 ppid=2386 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 8 00:31:04.344000 audit[2536]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.344000 audit[2536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea34aca0 a2=0 a3=1 items=0 ppid=2386 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.344000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 8 00:31:04.346000 audit[2538]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.346000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=ffffc76d02c0 a2=0 a3=1 items=0 ppid=2386 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 8 00:31:04.347000 audit[2539]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.347000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1604580 a2=0 a3=1 items=0 ppid=2386 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 8 00:31:04.349000 audit[2541]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.349000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcf14e780 a2=0 a3=1 items=0 ppid=2386 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.349000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 8 00:31:04.352000 audit[2544]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.352000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffad7ad60 a2=0 a3=1 items=0 ppid=2386 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 8 00:31:04.356000 audit[2547]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.356000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb383e10 a2=0 a3=1 items=0 ppid=2386 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 8 00:31:04.357000 audit[2548]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.357000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd1ac31c0 a2=0 a3=1 items=0 ppid=2386 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 8 00:31:04.359000 audit[2550]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.359000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd8fec000 a2=0 a3=1 items=0 ppid=2386 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:31:04.362000 audit[2553]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.362000 audit[2553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd4dc8420 a2=0 a3=1 items=0 ppid=2386 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.362000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:31:04.363000 audit[2554]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.363000 audit[2554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa9cdac0 a2=0 a3=1 items=0 ppid=2386 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 8 00:31:04.365000 audit[2556]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.365000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff0822aa0 a2=0 a3=1 items=0 ppid=2386 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 8 00:31:04.366000 audit[2557]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.366000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe85e4ad0 a2=0 a3=1 items=0 ppid=2386 pid=2557 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 8 00:31:04.368000 audit[2559]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.368000 audit[2559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcc419fc0 a2=0 a3=1 items=0 ppid=2386 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 8 00:31:04.370000 audit[2562]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:31:04.370000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffeb07e9a0 a2=0 a3=1 items=0 ppid=2386 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 8 00:31:04.373000 audit[2564]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 8 00:31:04.373000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffc129bf70 a2=0 
a3=1 items=0 ppid=2386 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.373000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:04.373000 audit[2564]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 8 00:31:04.373000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc129bf70 a2=0 a3=1 items=0 ppid=2386 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:04.373000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:05.450304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027127326.mount: Deactivated successfully. 
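The `PROCTITLE` records above carry the audited command line as hex-encoded bytes, with NUL bytes separating the arguments. As a minimal sketch, the snippet below decodes one such payload (the hex string is copied verbatim from the `iptables-restore` records in this log; `decode_proctitle` is an illustrative helper, not part of any audit tooling):

```python
# Decode an audit PROCTITLE record: the kernel logs the process's
# command line as hex-encoded bytes, with NUL (0x00) separators
# between argv entries.
# This payload is copied from an iptables-restore record above.
PROCTITLE_HEX = (
    "69707461626C65732D726573746F7265002D770035"
    "002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)

def decode_proctitle(hex_string: str) -> str:
    """Turn a PROCTITLE hex payload back into a readable command line."""
    args = bytes.fromhex(hex_string).split(b"\x00")
    return " ".join(arg.decode("ascii", errors="replace") for arg in args)

print(decode_proctitle(PROCTITLE_HEX))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

The same decoding is what `ausearch -i` performs when rendering these records; applying it to the other payloads above recovers the kube-proxy chain setup commands (e.g. `iptables -w 5 -W 100000 -N KUBE-SERVICES -t nat`).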
May 8 00:31:05.980802 env[1315]: time="2025-05-08T00:31:05.980742811Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:05.982073 env[1315]: time="2025-05-08T00:31:05.982041119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:05.983626 env[1315]: time="2025-05-08T00:31:05.983592559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:05.984901 env[1315]: time="2025-05-08T00:31:05.984872826Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:05.986162 env[1315]: time="2025-05-08T00:31:05.986104291Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 8 00:31:05.989270 env[1315]: time="2025-05-08T00:31:05.989235054Z" level=info msg="CreateContainer within sandbox \"096b4e3921356c6474dc2d3da31ca2c09f6c8a7913e1141eb42fe79d0e0abba8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:31:05.999375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861085446.mount: Deactivated successfully. 
May 8 00:31:06.003577 env[1315]: time="2025-05-08T00:31:06.003539435Z" level=info msg="CreateContainer within sandbox \"096b4e3921356c6474dc2d3da31ca2c09f6c8a7913e1141eb42fe79d0e0abba8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2bdee454df0ae5c194c3ebf806a2463521259ee1bee0f4b3a59955c4b7168d65\"" May 8 00:31:06.005448 env[1315]: time="2025-05-08T00:31:06.005408888Z" level=info msg="StartContainer for \"2bdee454df0ae5c194c3ebf806a2463521259ee1bee0f4b3a59955c4b7168d65\"" May 8 00:31:06.062468 env[1315]: time="2025-05-08T00:31:06.062423004Z" level=info msg="StartContainer for \"2bdee454df0ae5c194c3ebf806a2463521259ee1bee0f4b3a59955c4b7168d65\" returns successfully" May 8 00:31:09.632000 audit[2605]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.635847 kernel: kauditd_printk_skb: 143 callbacks suppressed May 8 00:31:09.635913 kernel: audit: type=1325 audit(1746664269.632:282): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.632000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff42f5c00 a2=0 a3=1 items=0 ppid=2386 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:09.639069 kernel: audit: type=1300 audit(1746664269.632:282): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff42f5c00 a2=0 a3=1 items=0 ppid=2386 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:09.639127 kernel: audit: type=1327 audit(1746664269.632:282): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:09.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:09.644000 audit[2605]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.647290 kernel: audit: type=1325 audit(1746664269.644:283): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.647357 kernel: audit: type=1300 audit(1746664269.644:283): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff42f5c00 a2=0 a3=1 items=0 ppid=2386 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:09.644000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff42f5c00 a2=0 a3=1 items=0 ppid=2386 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:09.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:09.651590 kernel: audit: type=1327 audit(1746664269.644:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:09.655000 audit[2607]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.655000 audit[2607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffffbd05a90 a2=0 
a3=1 items=0 ppid=2386 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:09.661439 kernel: audit: type=1325 audit(1746664269.655:284): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.661492 kernel: audit: type=1300 audit(1746664269.655:284): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffffbd05a90 a2=0 a3=1 items=0 ppid=2386 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:09.661509 kernel: audit: type=1327 audit(1746664269.655:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:09.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:09.667000 audit[2607]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.667000 audit[2607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffbd05a90 a2=0 a3=1 items=0 ppid=2386 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:09.670376 kernel: audit: type=1325 audit(1746664269.667:285): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:09.667000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:10.041177 kubelet[2224]: I0508 00:31:10.041097 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-zlg88" podStartSLOduration=5.101739981 podStartE2EDuration="7.041079358s" podCreationTimestamp="2025-05-08 00:31:03 +0000 UTC" firstStartedPulling="2025-05-08 00:31:04.047475551 +0000 UTC m=+15.064154885" lastFinishedPulling="2025-05-08 00:31:05.986814968 +0000 UTC m=+17.003494262" observedRunningTime="2025-05-08 00:31:06.12224266 +0000 UTC m=+17.138921994" watchObservedRunningTime="2025-05-08 00:31:10.041079358 +0000 UTC m=+21.057758692" May 8 00:31:10.041615 kubelet[2224]: I0508 00:31:10.041247 2224 topology_manager.go:215] "Topology Admit Handler" podUID="0abf581b-49da-4047-b094-be9724eb9230" podNamespace="calico-system" podName="calico-typha-5bb45c76c7-chgs7" May 8 00:31:10.043299 kubelet[2224]: W0508 00:31:10.043248 2224 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:31:10.043299 kubelet[2224]: E0508 00:31:10.043298 2224 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:31:10.043772 kubelet[2224]: W0508 00:31:10.043732 2224 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource 
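As an aside on reading the audit records above: the `PROCTITLE` field encodes the audited process's command line as hex, with NUL bytes separating the argv elements. A minimal sketch of decoding one of the `proctitle=` values from this log:

```python
# Decode an audit PROCTITLE hex string into its argv.
# The arguments are NUL-separated in the raw bytes.
hexstr = (
    "69707461626C65732D726573746F7265002D770035002D57"
    "00313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = bytes.fromhex(hexstr).split(b"\x00")
decoded = [a.decode() for a in argv]
print(decoded)
# → ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

This confirms the `comm="iptables-restor"` entries correspond to `iptables-restore` invocations (via `xtables-nft-multi`) with wait/locking and counter-preserving flags.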
"configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:31:10.043772 kubelet[2224]: E0508 00:31:10.043762 2224 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:31:10.048068 kubelet[2224]: W0508 00:31:10.048028 2224 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:31:10.048068 kubelet[2224]: E0508 00:31:10.048061 2224 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:31:10.099894 kubelet[2224]: I0508 00:31:10.099836 2224 topology_manager.go:215] "Topology Admit Handler" podUID="941ee7e7-d02f-426f-80dc-e0162e58774f" podNamespace="calico-system" podName="calico-node-gr7rq" May 8 00:31:10.109113 kubelet[2224]: I0508 00:31:10.109075 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0abf581b-49da-4047-b094-be9724eb9230-typha-certs\") pod \"calico-typha-5bb45c76c7-chgs7\" (UID: \"0abf581b-49da-4047-b094-be9724eb9230\") " pod="calico-system/calico-typha-5bb45c76c7-chgs7" May 8 00:31:10.109113 kubelet[2224]: I0508 00:31:10.109119 2224 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qgx\" (UniqueName: \"kubernetes.io/projected/0abf581b-49da-4047-b094-be9724eb9230-kube-api-access-97qgx\") pod \"calico-typha-5bb45c76c7-chgs7\" (UID: \"0abf581b-49da-4047-b094-be9724eb9230\") " pod="calico-system/calico-typha-5bb45c76c7-chgs7" May 8 00:31:10.109313 kubelet[2224]: I0508 00:31:10.109142 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abf581b-49da-4047-b094-be9724eb9230-tigera-ca-bundle\") pod \"calico-typha-5bb45c76c7-chgs7\" (UID: \"0abf581b-49da-4047-b094-be9724eb9230\") " pod="calico-system/calico-typha-5bb45c76c7-chgs7" May 8 00:31:10.209557 kubelet[2224]: I0508 00:31:10.209468 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-xtables-lock\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209557 kubelet[2224]: I0508 00:31:10.209547 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-var-run-calico\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209557 kubelet[2224]: I0508 00:31:10.209565 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-var-lib-calico\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209761 kubelet[2224]: I0508 00:31:10.209585 2224 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-cni-log-dir\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209761 kubelet[2224]: I0508 00:31:10.209609 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2jkn\" (UniqueName: \"kubernetes.io/projected/941ee7e7-d02f-426f-80dc-e0162e58774f-kube-api-access-c2jkn\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209761 kubelet[2224]: I0508 00:31:10.209642 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-policysync\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209761 kubelet[2224]: I0508 00:31:10.209682 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/941ee7e7-d02f-426f-80dc-e0162e58774f-node-certs\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209761 kubelet[2224]: I0508 00:31:10.209724 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-cni-bin-dir\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209882 kubelet[2224]: I0508 00:31:10.209790 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/941ee7e7-d02f-426f-80dc-e0162e58774f-tigera-ca-bundle\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209882 kubelet[2224]: I0508 00:31:10.209808 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-flexvol-driver-host\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209882 kubelet[2224]: I0508 00:31:10.209849 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-lib-modules\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.209952 kubelet[2224]: I0508 00:31:10.209895 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/941ee7e7-d02f-426f-80dc-e0162e58774f-cni-net-dir\") pod \"calico-node-gr7rq\" (UID: \"941ee7e7-d02f-426f-80dc-e0162e58774f\") " pod="calico-system/calico-node-gr7rq" May 8 00:31:10.298749 kubelet[2224]: I0508 00:31:10.298621 2224 topology_manager.go:215] "Topology Admit Handler" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff" podNamespace="calico-system" podName="csi-node-driver-76g2m" May 8 00:31:10.298919 kubelet[2224]: E0508 00:31:10.298892 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76g2m" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff" May 8 
00:31:10.318409 kubelet[2224]: E0508 00:31:10.318373 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.318409 kubelet[2224]: W0508 00:31:10.318400 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.318556 kubelet[2224]: E0508 00:31:10.318420 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.397829 kubelet[2224]: E0508 00:31:10.397796 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.398001 kubelet[2224]: W0508 00:31:10.397980 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.398069 kubelet[2224]: E0508 00:31:10.398056 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.398293 kubelet[2224]: E0508 00:31:10.398268 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.398377 kubelet[2224]: W0508 00:31:10.398363 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.398436 kubelet[2224]: E0508 00:31:10.398425 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.398677 kubelet[2224]: E0508 00:31:10.398663 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.398756 kubelet[2224]: W0508 00:31:10.398742 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.398819 kubelet[2224]: E0508 00:31:10.398807 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.399089 kubelet[2224]: E0508 00:31:10.399075 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.399169 kubelet[2224]: W0508 00:31:10.399156 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.399233 kubelet[2224]: E0508 00:31:10.399221 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.399497 kubelet[2224]: E0508 00:31:10.399483 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.399570 kubelet[2224]: W0508 00:31:10.399557 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.399641 kubelet[2224]: E0508 00:31:10.399629 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.399835 kubelet[2224]: E0508 00:31:10.399823 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.399908 kubelet[2224]: W0508 00:31:10.399894 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.399978 kubelet[2224]: E0508 00:31:10.399959 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.400175 kubelet[2224]: E0508 00:31:10.400164 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.400249 kubelet[2224]: W0508 00:31:10.400237 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.400345 kubelet[2224]: E0508 00:31:10.400324 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.400564 kubelet[2224]: E0508 00:31:10.400552 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.400633 kubelet[2224]: W0508 00:31:10.400620 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.400693 kubelet[2224]: E0508 00:31:10.400680 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.400941 kubelet[2224]: E0508 00:31:10.400928 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.401021 kubelet[2224]: W0508 00:31:10.401008 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.401077 kubelet[2224]: E0508 00:31:10.401066 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.401258 kubelet[2224]: E0508 00:31:10.401246 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.401359 kubelet[2224]: W0508 00:31:10.401344 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.401432 kubelet[2224]: E0508 00:31:10.401420 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.401643 kubelet[2224]: E0508 00:31:10.401631 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.401714 kubelet[2224]: W0508 00:31:10.401702 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.401769 kubelet[2224]: E0508 00:31:10.401758 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.401971 kubelet[2224]: E0508 00:31:10.401958 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.402050 kubelet[2224]: W0508 00:31:10.402038 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.402107 kubelet[2224]: E0508 00:31:10.402095 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.402327 kubelet[2224]: E0508 00:31:10.402315 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.402410 kubelet[2224]: W0508 00:31:10.402396 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.402465 kubelet[2224]: E0508 00:31:10.402454 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.402653 kubelet[2224]: E0508 00:31:10.402642 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.402718 kubelet[2224]: W0508 00:31:10.402706 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.402785 kubelet[2224]: E0508 00:31:10.402774 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.402978 kubelet[2224]: E0508 00:31:10.402966 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.403055 kubelet[2224]: W0508 00:31:10.403043 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.403110 kubelet[2224]: E0508 00:31:10.403099 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.403312 kubelet[2224]: E0508 00:31:10.403301 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.403403 kubelet[2224]: W0508 00:31:10.403388 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.403460 kubelet[2224]: E0508 00:31:10.403449 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.403671 kubelet[2224]: E0508 00:31:10.403659 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.403745 kubelet[2224]: W0508 00:31:10.403733 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.403805 kubelet[2224]: E0508 00:31:10.403793 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.403995 kubelet[2224]: E0508 00:31:10.403983 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.404068 kubelet[2224]: W0508 00:31:10.404056 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.404126 kubelet[2224]: E0508 00:31:10.404116 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.404344 kubelet[2224]: E0508 00:31:10.404324 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.404417 kubelet[2224]: W0508 00:31:10.404404 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.404472 kubelet[2224]: E0508 00:31:10.404461 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.404665 kubelet[2224]: E0508 00:31:10.404653 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.404735 kubelet[2224]: W0508 00:31:10.404722 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.404790 kubelet[2224]: E0508 00:31:10.404779 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.412300 kubelet[2224]: E0508 00:31:10.412284 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.412404 kubelet[2224]: W0508 00:31:10.412388 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.412469 kubelet[2224]: E0508 00:31:10.412457 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.412704 kubelet[2224]: E0508 00:31:10.412690 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.412788 kubelet[2224]: W0508 00:31:10.412775 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.412844 kubelet[2224]: E0508 00:31:10.412834 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.413096 kubelet[2224]: E0508 00:31:10.413083 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.413168 kubelet[2224]: W0508 00:31:10.413156 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.413228 kubelet[2224]: E0508 00:31:10.413217 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.413371 kubelet[2224]: I0508 00:31:10.413354 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stts\" (UniqueName: \"kubernetes.io/projected/f2615509-fc42-4214-b9b8-44dfb15979ff-kube-api-access-9stts\") pod \"csi-node-driver-76g2m\" (UID: \"f2615509-fc42-4214-b9b8-44dfb15979ff\") " pod="calico-system/csi-node-driver-76g2m" May 8 00:31:10.413675 kubelet[2224]: E0508 00:31:10.413636 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.413675 kubelet[2224]: W0508 00:31:10.413657 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.413675 kubelet[2224]: E0508 00:31:10.413675 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 8 00:31:10.414368 kubelet[2224]: I0508 00:31:10.414365 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f2615509-fc42-4214-b9b8-44dfb15979ff-varrun\") pod \"csi-node-driver-76g2m\" (UID: \"f2615509-fc42-4214-b9b8-44dfb15979ff\") " pod="calico-system/csi-node-driver-76g2m"
May 8 00:31:10.414520 kubelet[2224]: E0508 00:31:10.414507 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:31:10.414520 kubelet[2224]: W0508 00:31:10.414518 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:31:10.414577 kubelet[2224]: E0508 00:31:10.414526 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 8 00:31:10.414577 kubelet[2224]: I0508 00:31:10.414539 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2615509-fc42-4214-b9b8-44dfb15979ff-kubelet-dir\") pod \"csi-node-driver-76g2m\" (UID: \"f2615509-fc42-4214-b9b8-44dfb15979ff\") " pod="calico-system/csi-node-driver-76g2m"
May 8 00:31:10.414697 kubelet[2224]: E0508 00:31:10.414683 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:31:10.414697 kubelet[2224]: W0508 00:31:10.414695 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:31:10.414748 kubelet[2224]: E0508 00:31:10.414704 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 8 00:31:10.414748 kubelet[2224]: I0508 00:31:10.414717 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2615509-fc42-4214-b9b8-44dfb15979ff-registration-dir\") pod \"csi-node-driver-76g2m\" (UID: \"f2615509-fc42-4214-b9b8-44dfb15979ff\") " pod="calico-system/csi-node-driver-76g2m"
May 8 00:31:10.414861 kubelet[2224]: E0508 00:31:10.414844 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:31:10.414861 kubelet[2224]: W0508 00:31:10.414856 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:31:10.414914 kubelet[2224]: E0508 00:31:10.414865 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:31:10.415015 kubelet[2224]: E0508 00:31:10.415005 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:31:10.415015 kubelet[2224]: W0508 00:31:10.415015 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:31:10.415071 kubelet[2224]: E0508 00:31:10.415024 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 8 00:31:10.415071 kubelet[2224]: I0508 00:31:10.415038 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2615509-fc42-4214-b9b8-44dfb15979ff-socket-dir\") pod \"csi-node-driver-76g2m\" (UID: \"f2615509-fc42-4214-b9b8-44dfb15979ff\") " pod="calico-system/csi-node-driver-76g2m"
May 8 00:31:10.415209 kubelet[2224]: E0508 00:31:10.415198 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:31:10.415209 kubelet[2224]: W0508 00:31:10.415209 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:31:10.415297 kubelet[2224]: E0508 00:31:10.415220 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:31:10.415366 kubelet[2224]: E0508 00:31:10.415355 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:31:10.415366 kubelet[2224]: W0508 00:31:10.415365 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:31:10.415440 kubelet[2224]: E0508 00:31:10.415379 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.622220 kubelet[2224]: E0508 00:31:10.622024 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.622220 kubelet[2224]: W0508 00:31:10.622036 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.622220 kubelet[2224]: E0508 00:31:10.622048 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.622519 kubelet[2224]: E0508 00:31:10.622440 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.622546 kubelet[2224]: W0508 00:31:10.622525 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.622546 kubelet[2224]: E0508 00:31:10.622541 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.684000 audit[2687]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:10.684000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffca6c91a0 a2=0 a3=1 items=0 ppid=2386 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:10.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:10.700000 audit[2687]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:10.700000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca6c91a0 a2=0 a3=1 items=0 ppid=2386 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:10.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:10.723458 kubelet[2224]: E0508 00:31:10.723434 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.723458 kubelet[2224]: W0508 00:31:10.723455 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.723592 kubelet[2224]: E0508 00:31:10.723475 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.723717 kubelet[2224]: E0508 00:31:10.723704 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.723717 kubelet[2224]: W0508 00:31:10.723716 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.723780 kubelet[2224]: E0508 00:31:10.723726 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.723935 kubelet[2224]: E0508 00:31:10.723923 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.723935 kubelet[2224]: W0508 00:31:10.723935 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.724001 kubelet[2224]: E0508 00:31:10.723944 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.724117 kubelet[2224]: E0508 00:31:10.724106 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.724117 kubelet[2224]: W0508 00:31:10.724117 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.724183 kubelet[2224]: E0508 00:31:10.724126 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.724296 kubelet[2224]: E0508 00:31:10.724284 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.724296 kubelet[2224]: W0508 00:31:10.724295 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.724374 kubelet[2224]: E0508 00:31:10.724304 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.724500 kubelet[2224]: E0508 00:31:10.724490 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.724532 kubelet[2224]: W0508 00:31:10.724501 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.724532 kubelet[2224]: E0508 00:31:10.724510 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.827922 kubelet[2224]: E0508 00:31:10.827879 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.827922 kubelet[2224]: W0508 00:31:10.827907 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.827922 kubelet[2224]: E0508 00:31:10.827928 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.828148 kubelet[2224]: E0508 00:31:10.828127 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.828148 kubelet[2224]: W0508 00:31:10.828136 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.828148 kubelet[2224]: E0508 00:31:10.828146 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.828346 kubelet[2224]: E0508 00:31:10.828330 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.828346 kubelet[2224]: W0508 00:31:10.828342 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.828422 kubelet[2224]: E0508 00:31:10.828359 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.828601 kubelet[2224]: E0508 00:31:10.828579 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.828601 kubelet[2224]: W0508 00:31:10.828590 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.828601 kubelet[2224]: E0508 00:31:10.828600 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.828769 kubelet[2224]: E0508 00:31:10.828752 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.828769 kubelet[2224]: W0508 00:31:10.828763 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.828835 kubelet[2224]: E0508 00:31:10.828772 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.828939 kubelet[2224]: E0508 00:31:10.828905 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.828939 kubelet[2224]: W0508 00:31:10.828915 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.828939 kubelet[2224]: E0508 00:31:10.828922 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.904498 kubelet[2224]: E0508 00:31:10.904417 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.904631 kubelet[2224]: W0508 00:31:10.904614 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.904697 kubelet[2224]: E0508 00:31:10.904685 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.930117 kubelet[2224]: E0508 00:31:10.930019 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.930117 kubelet[2224]: W0508 00:31:10.930047 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.930117 kubelet[2224]: E0508 00:31:10.930066 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.930339 kubelet[2224]: E0508 00:31:10.930242 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.930339 kubelet[2224]: W0508 00:31:10.930252 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.930339 kubelet[2224]: E0508 00:31:10.930276 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.930452 kubelet[2224]: E0508 00:31:10.930435 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.930452 kubelet[2224]: W0508 00:31:10.930449 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.930510 kubelet[2224]: E0508 00:31:10.930459 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:10.930677 kubelet[2224]: E0508 00:31:10.930666 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.930677 kubelet[2224]: W0508 00:31:10.930676 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.930741 kubelet[2224]: E0508 00:31:10.930684 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:10.930839 kubelet[2224]: E0508 00:31:10.930829 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:10.930839 kubelet[2224]: W0508 00:31:10.930839 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:10.930892 kubelet[2224]: E0508 00:31:10.930846 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:11.031627 kubelet[2224]: E0508 00:31:11.031598 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.031771 kubelet[2224]: W0508 00:31:11.031755 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.031837 kubelet[2224]: E0508 00:31:11.031823 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:11.032155 kubelet[2224]: E0508 00:31:11.032143 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.032242 kubelet[2224]: W0508 00:31:11.032228 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.032317 kubelet[2224]: E0508 00:31:11.032304 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:11.032571 kubelet[2224]: E0508 00:31:11.032558 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.032645 kubelet[2224]: W0508 00:31:11.032632 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.032702 kubelet[2224]: E0508 00:31:11.032691 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:11.032965 kubelet[2224]: E0508 00:31:11.032952 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.033037 kubelet[2224]: W0508 00:31:11.033025 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.033106 kubelet[2224]: E0508 00:31:11.033094 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:11.034018 kubelet[2224]: E0508 00:31:11.034001 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.034123 kubelet[2224]: W0508 00:31:11.034108 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.034181 kubelet[2224]: E0508 00:31:11.034170 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:11.040091 kubelet[2224]: E0508 00:31:11.040057 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.040091 kubelet[2224]: W0508 00:31:11.040078 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.040091 kubelet[2224]: E0508 00:31:11.040091 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:11.043287 kubelet[2224]: E0508 00:31:11.043262 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.043567 kubelet[2224]: W0508 00:31:11.043549 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.043629 kubelet[2224]: E0508 00:31:11.043616 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:11.045414 kubelet[2224]: E0508 00:31:11.045386 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.045414 kubelet[2224]: W0508 00:31:11.045405 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.045496 kubelet[2224]: E0508 00:31:11.045418 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:11.134735 kubelet[2224]: E0508 00:31:11.134705 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.134735 kubelet[2224]: W0508 00:31:11.134727 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.134879 kubelet[2224]: E0508 00:31:11.134747 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:11.134986 kubelet[2224]: E0508 00:31:11.134971 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.135018 kubelet[2224]: W0508 00:31:11.134985 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.135018 kubelet[2224]: E0508 00:31:11.134996 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:11.171884 kubelet[2224]: E0508 00:31:11.171798 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.171884 kubelet[2224]: W0508 00:31:11.171820 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.171884 kubelet[2224]: E0508 00:31:11.171837 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:11.173498 kubelet[2224]: E0508 00:31:11.173478 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:11.173498 kubelet[2224]: W0508 00:31:11.173496 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:11.173585 kubelet[2224]: E0508 00:31:11.173510 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:11.248658 kubelet[2224]: E0508 00:31:11.248618 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:11.249128 env[1315]: time="2025-05-08T00:31:11.249060955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb45c76c7-chgs7,Uid:0abf581b-49da-4047-b094-be9724eb9230,Namespace:calico-system,Attempt:0,}" May 8 00:31:11.266977 env[1315]: time="2025-05-08T00:31:11.266908985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:11.267220 env[1315]: time="2025-05-08T00:31:11.267193796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:11.267325 env[1315]: time="2025-05-08T00:31:11.267301680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:11.267583 env[1315]: time="2025-05-08T00:31:11.267555010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa2ae42d26de1fff34c8f802d513a6ea7ab8bb86c1ec7e7487539b75a0a51800 pid=2731 runtime=io.containerd.runc.v2 May 8 00:31:11.304299 kubelet[2224]: E0508 00:31:11.304184 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:11.307579 env[1315]: time="2025-05-08T00:31:11.307534360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gr7rq,Uid:941ee7e7-d02f-426f-80dc-e0162e58774f,Namespace:calico-system,Attempt:0,}" May 8 00:31:11.325966 env[1315]: time="2025-05-08T00:31:11.325883770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:11.326143 env[1315]: time="2025-05-08T00:31:11.325957613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:11.326143 env[1315]: time="2025-05-08T00:31:11.325970053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:11.326226 env[1315]: time="2025-05-08T00:31:11.326158741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f pid=2766 runtime=io.containerd.runc.v2 May 8 00:31:11.335083 env[1315]: time="2025-05-08T00:31:11.335033734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb45c76c7-chgs7,Uid:0abf581b-49da-4047-b094-be9724eb9230,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa2ae42d26de1fff34c8f802d513a6ea7ab8bb86c1ec7e7487539b75a0a51800\"" May 8 00:31:11.335952 kubelet[2224]: E0508 00:31:11.335928 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:11.339383 env[1315]: time="2025-05-08T00:31:11.339339025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:31:11.351127 systemd[1]: run-containerd-runc-k8s.io-fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f-runc.6t3REf.mount: Deactivated successfully. 
May 8 00:31:11.421834 env[1315]: time="2025-05-08T00:31:11.421786664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gr7rq,Uid:941ee7e7-d02f-426f-80dc-e0162e58774f,Namespace:calico-system,Attempt:0,} returns sandbox id \"fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f\"" May 8 00:31:11.422967 kubelet[2224]: E0508 00:31:11.422470 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:12.072411 kubelet[2224]: E0508 00:31:12.072366 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76g2m" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff" May 8 00:31:13.184175 env[1315]: time="2025-05-08T00:31:13.181163538Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:13.184175 env[1315]: time="2025-05-08T00:31:13.183139210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:13.185566 env[1315]: time="2025-05-08T00:31:13.185489776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:13.186070 env[1315]: time="2025-05-08T00:31:13.186029716Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 8 00:31:13.187104 env[1315]: time="2025-05-08T00:31:13.187060034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 8 00:31:13.191573 env[1315]: time="2025-05-08T00:31:13.191541758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:31:13.222496 env[1315]: time="2025-05-08T00:31:13.222450089Z" level=info msg="CreateContainer within sandbox \"fa2ae42d26de1fff34c8f802d513a6ea7ab8bb86c1ec7e7487539b75a0a51800\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:31:13.249320 env[1315]: time="2025-05-08T00:31:13.249259710Z" level=info msg="CreateContainer within sandbox \"fa2ae42d26de1fff34c8f802d513a6ea7ab8bb86c1ec7e7487539b75a0a51800\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06f779d77d9a74698ecffca36e4f6a0d3771a1fc003ad55a5ecc925a82e05d26\"" May 8 00:31:13.251861 env[1315]: time="2025-05-08T00:31:13.251817644Z" level=info msg="StartContainer for \"06f779d77d9a74698ecffca36e4f6a0d3771a1fc003ad55a5ecc925a82e05d26\"" May 8 00:31:13.318832 env[1315]: time="2025-05-08T00:31:13.318783014Z" level=info msg="StartContainer for \"06f779d77d9a74698ecffca36e4f6a0d3771a1fc003ad55a5ecc925a82e05d26\" returns successfully" May 8 00:31:14.072216 kubelet[2224]: E0508 00:31:14.072170 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76g2m" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff" May 8 00:31:14.132524 kubelet[2224]: E0508 00:31:14.132492 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 
8 00:31:14.228970 kubelet[2224]: E0508 00:31:14.228928 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.228970 kubelet[2224]: W0508 00:31:14.228954 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.228970 kubelet[2224]: E0508 00:31:14.228975 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.229165 kubelet[2224]: E0508 00:31:14.229115 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.229165 kubelet[2224]: W0508 00:31:14.229123 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.229165 kubelet[2224]: E0508 00:31:14.229131 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.229302 kubelet[2224]: E0508 00:31:14.229284 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.229302 kubelet[2224]: W0508 00:31:14.229296 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.229370 kubelet[2224]: E0508 00:31:14.229305 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.229481 kubelet[2224]: E0508 00:31:14.229453 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.229481 kubelet[2224]: W0508 00:31:14.229472 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.229481 kubelet[2224]: E0508 00:31:14.229481 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.229656 kubelet[2224]: E0508 00:31:14.229642 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.229656 kubelet[2224]: W0508 00:31:14.229655 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.229722 kubelet[2224]: E0508 00:31:14.229664 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.229790 kubelet[2224]: E0508 00:31:14.229779 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.229790 kubelet[2224]: W0508 00:31:14.229788 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.229854 kubelet[2224]: E0508 00:31:14.229797 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.229925 kubelet[2224]: E0508 00:31:14.229911 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.229925 kubelet[2224]: W0508 00:31:14.229921 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.229991 kubelet[2224]: E0508 00:31:14.229929 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.230060 kubelet[2224]: E0508 00:31:14.230046 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.230060 kubelet[2224]: W0508 00:31:14.230055 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.230123 kubelet[2224]: E0508 00:31:14.230063 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.230203 kubelet[2224]: E0508 00:31:14.230185 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.230203 kubelet[2224]: W0508 00:31:14.230196 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.230203 kubelet[2224]: E0508 00:31:14.230205 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.230345 kubelet[2224]: E0508 00:31:14.230328 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.230345 kubelet[2224]: W0508 00:31:14.230340 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.230409 kubelet[2224]: E0508 00:31:14.230348 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.230884 kubelet[2224]: E0508 00:31:14.230460 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.230884 kubelet[2224]: W0508 00:31:14.230477 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.230884 kubelet[2224]: E0508 00:31:14.230486 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.230884 kubelet[2224]: E0508 00:31:14.230603 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.230884 kubelet[2224]: W0508 00:31:14.230609 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.230884 kubelet[2224]: E0508 00:31:14.230616 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.230884 kubelet[2224]: E0508 00:31:14.230734 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.230884 kubelet[2224]: W0508 00:31:14.230740 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.230884 kubelet[2224]: E0508 00:31:14.230748 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.230884 kubelet[2224]: E0508 00:31:14.230877 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.231162 kubelet[2224]: W0508 00:31:14.230884 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.231162 kubelet[2224]: E0508 00:31:14.230892 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.231162 kubelet[2224]: E0508 00:31:14.231040 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.231162 kubelet[2224]: W0508 00:31:14.231048 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.231162 kubelet[2224]: E0508 00:31:14.231055 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.259080 kubelet[2224]: E0508 00:31:14.259059 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.259080 kubelet[2224]: W0508 00:31:14.259077 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.259221 kubelet[2224]: E0508 00:31:14.259091 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.259305 kubelet[2224]: E0508 00:31:14.259290 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.259305 kubelet[2224]: W0508 00:31:14.259305 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.259368 kubelet[2224]: E0508 00:31:14.259320 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.259516 kubelet[2224]: E0508 00:31:14.259504 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.259516 kubelet[2224]: W0508 00:31:14.259516 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.259588 kubelet[2224]: E0508 00:31:14.259531 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.259754 kubelet[2224]: E0508 00:31:14.259731 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.259754 kubelet[2224]: W0508 00:31:14.259745 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.259815 kubelet[2224]: E0508 00:31:14.259762 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.259915 kubelet[2224]: E0508 00:31:14.259900 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.259915 kubelet[2224]: W0508 00:31:14.259910 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.259977 kubelet[2224]: E0508 00:31:14.259923 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.260057 kubelet[2224]: E0508 00:31:14.260046 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.260057 kubelet[2224]: W0508 00:31:14.260057 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.260117 kubelet[2224]: E0508 00:31:14.260065 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.260219 kubelet[2224]: E0508 00:31:14.260210 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.260219 kubelet[2224]: W0508 00:31:14.260219 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.260302 kubelet[2224]: E0508 00:31:14.260232 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.260446 kubelet[2224]: E0508 00:31:14.260426 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.260490 kubelet[2224]: W0508 00:31:14.260448 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.260490 kubelet[2224]: E0508 00:31:14.260475 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.260657 kubelet[2224]: E0508 00:31:14.260645 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.260698 kubelet[2224]: W0508 00:31:14.260658 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.260732 kubelet[2224]: E0508 00:31:14.260685 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.260820 kubelet[2224]: E0508 00:31:14.260808 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.260851 kubelet[2224]: W0508 00:31:14.260820 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.260851 kubelet[2224]: E0508 00:31:14.260841 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.260973 kubelet[2224]: E0508 00:31:14.260963 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.261009 kubelet[2224]: W0508 00:31:14.260975 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.261009 kubelet[2224]: E0508 00:31:14.260990 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.261147 kubelet[2224]: E0508 00:31:14.261136 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.261183 kubelet[2224]: W0508 00:31:14.261148 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.261183 kubelet[2224]: E0508 00:31:14.261161 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.261331 kubelet[2224]: E0508 00:31:14.261319 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.261373 kubelet[2224]: W0508 00:31:14.261331 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.261373 kubelet[2224]: E0508 00:31:14.261345 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.261565 kubelet[2224]: E0508 00:31:14.261552 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.261603 kubelet[2224]: W0508 00:31:14.261565 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.261603 kubelet[2224]: E0508 00:31:14.261580 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.261721 kubelet[2224]: E0508 00:31:14.261712 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.261721 kubelet[2224]: W0508 00:31:14.261721 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.261774 kubelet[2224]: E0508 00:31:14.261733 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.261918 kubelet[2224]: E0508 00:31:14.261907 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.261957 kubelet[2224]: W0508 00:31:14.261921 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.261957 kubelet[2224]: E0508 00:31:14.261936 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.262167 kubelet[2224]: E0508 00:31:14.262153 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.262198 kubelet[2224]: W0508 00:31:14.262168 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.262198 kubelet[2224]: E0508 00:31:14.262179 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:31:14.262342 kubelet[2224]: E0508 00:31:14.262330 2224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:31:14.262382 kubelet[2224]: W0508 00:31:14.262343 2224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:31:14.262382 kubelet[2224]: E0508 00:31:14.262354 2224 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:31:14.314441 env[1315]: time="2025-05-08T00:31:14.314393315Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:14.316967 env[1315]: time="2025-05-08T00:31:14.316939244Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:14.318369 env[1315]: time="2025-05-08T00:31:14.318343814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:14.325768 env[1315]: time="2025-05-08T00:31:14.323502875Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:14.325768 env[1315]: time="2025-05-08T00:31:14.324773800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 00:31:14.329359 env[1315]: time="2025-05-08T00:31:14.329328280Z" level=info msg="CreateContainer within sandbox \"fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:31:14.349500 env[1315]: time="2025-05-08T00:31:14.349449667Z" level=info msg="CreateContainer within sandbox \"fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"48ca3f722111ae49e418327b9f3ed68c8dda7462be6bcd03cb9fc9eaa8369072\"" May 8 
00:31:14.350214 env[1315]: time="2025-05-08T00:31:14.350184213Z" level=info msg="StartContainer for \"48ca3f722111ae49e418327b9f3ed68c8dda7462be6bcd03cb9fc9eaa8369072\"" May 8 00:31:14.432294 env[1315]: time="2025-05-08T00:31:14.432236257Z" level=info msg="StartContainer for \"48ca3f722111ae49e418327b9f3ed68c8dda7462be6bcd03cb9fc9eaa8369072\" returns successfully" May 8 00:31:14.475134 env[1315]: time="2025-05-08T00:31:14.475088964Z" level=info msg="shim disconnected" id=48ca3f722111ae49e418327b9f3ed68c8dda7462be6bcd03cb9fc9eaa8369072 May 8 00:31:14.475134 env[1315]: time="2025-05-08T00:31:14.475134285Z" level=warning msg="cleaning up after shim disconnected" id=48ca3f722111ae49e418327b9f3ed68c8dda7462be6bcd03cb9fc9eaa8369072 namespace=k8s.io May 8 00:31:14.475440 env[1315]: time="2025-05-08T00:31:14.475143406Z" level=info msg="cleaning up dead shim" May 8 00:31:14.484258 env[1315]: time="2025-05-08T00:31:14.484208004Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:31:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2926 runtime=io.containerd.runc.v2\n" May 8 00:31:14.594855 kubelet[2224]: I0508 00:31:14.594465 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bb45c76c7-chgs7" podStartSLOduration=2.742875968 podStartE2EDuration="4.594448559s" podCreationTimestamp="2025-05-08 00:31:10 +0000 UTC" firstStartedPulling="2025-05-08 00:31:11.338944569 +0000 UTC m=+22.355623863" lastFinishedPulling="2025-05-08 00:31:13.19051712 +0000 UTC m=+24.207196454" observedRunningTime="2025-05-08 00:31:14.148970099 +0000 UTC m=+25.165649433" watchObservedRunningTime="2025-05-08 00:31:14.594448559 +0000 UTC m=+25.611127894" May 8 00:31:14.610000 audit[2945]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:14.610000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 
a1=ffffec6d13e0 a2=0 a3=1 items=0 ppid=2386 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:14.610000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:14.617000 audit[2945]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:14.617000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffec6d13e0 a2=0 a3=1 items=0 ppid=2386 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:14.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:15.135845 kubelet[2224]: E0508 00:31:15.135814 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:15.136249 kubelet[2224]: E0508 00:31:15.135908 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:15.136839 env[1315]: time="2025-05-08T00:31:15.136797442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:31:15.195710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48ca3f722111ae49e418327b9f3ed68c8dda7462be6bcd03cb9fc9eaa8369072-rootfs.mount: Deactivated successfully. 
May 8 00:31:16.072129 kubelet[2224]: E0508 00:31:16.072073 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76g2m" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff" May 8 00:31:16.137047 kubelet[2224]: E0508 00:31:16.137010 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:17.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.15:22-10.0.0.1:46558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:17.174163 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:46558.service. May 8 00:31:17.177040 kernel: kauditd_printk_skb: 14 callbacks suppressed May 8 00:31:17.177112 kernel: audit: type=1130 audit(1746664277.174:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.15:22-10.0.0.1:46558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:17.221000 audit[2951]: USER_ACCT pid=2951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.221989 sshd[2951]: Accepted publickey for core from 10.0.0.1 port 46558 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:17.222000 audit[2951]: CRED_ACQ pid=2951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.224635 sshd[2951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:17.226595 kernel: audit: type=1101 audit(1746664277.221:291): pid=2951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.226641 kernel: audit: type=1103 audit(1746664277.222:292): pid=2951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.228145 kernel: audit: type=1006 audit(1746664277.223:293): pid=2951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 May 8 00:31:17.223000 audit[2951]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcfea1010 a2=3 a3=1 items=0 ppid=1 pid=2951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:17.230574 kernel: audit: 
type=1300 audit(1746664277.223:293): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcfea1010 a2=3 a3=1 items=0 ppid=1 pid=2951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:17.223000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:17.231456 kernel: audit: type=1327 audit(1746664277.223:293): proctitle=737368643A20636F7265205B707269765D May 8 00:31:17.232797 systemd-logind[1297]: New session 8 of user core. May 8 00:31:17.233611 systemd[1]: Started session-8.scope. May 8 00:31:17.237000 audit[2951]: USER_START pid=2951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.240000 audit[2954]: CRED_ACQ pid=2954 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.242827 kernel: audit: type=1105 audit(1746664277.237:294): pid=2951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.242875 kernel: audit: type=1103 audit(1746664277.240:295): pid=2954 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.354493 sshd[2951]: pam_unix(sshd:session): session closed for user core May 8 00:31:17.355000 audit[2951]: USER_END 
pid=2951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.358691 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:46558.service: Deactivated successfully. May 8 00:31:17.359778 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:31:17.355000 audit[2951]: CRED_DISP pid=2951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.359947 systemd-logind[1297]: Session 8 logged out. Waiting for processes to exit. May 8 00:31:17.362259 kernel: audit: type=1106 audit(1746664277.355:296): pid=2951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.362353 kernel: audit: type=1104 audit(1746664277.355:297): pid=2951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:17.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.15:22-10.0.0.1:46558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:17.363492 systemd-logind[1297]: Removed session 8. 
May 8 00:31:18.073189 kubelet[2224]: E0508 00:31:18.071715 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76g2m" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff"
May 8 00:31:18.689824 env[1315]: time="2025-05-08T00:31:18.689774970Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:31:18.690959 env[1315]: time="2025-05-08T00:31:18.690927125Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:31:18.692689 env[1315]: time="2025-05-08T00:31:18.692663217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:31:18.694500 env[1315]: time="2025-05-08T00:31:18.694457712Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:31:18.695118 env[1315]: time="2025-05-08T00:31:18.695089171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\""
May 8 00:31:18.700393 env[1315]: time="2025-05-08T00:31:18.700349610Z" level=info msg="CreateContainer within sandbox \"fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 8 00:31:18.712640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376972469.mount: Deactivated successfully.
May 8 00:31:18.715289 env[1315]: time="2025-05-08T00:31:18.715211379Z" level=info msg="CreateContainer within sandbox \"fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e9fe789b84944e3d69645774b921b86a45b07677c28a38c0424ac9dc05f3122\""
May 8 00:31:18.717190 env[1315]: time="2025-05-08T00:31:18.717157718Z" level=info msg="StartContainer for \"8e9fe789b84944e3d69645774b921b86a45b07677c28a38c0424ac9dc05f3122\""
May 8 00:31:18.894749 env[1315]: time="2025-05-08T00:31:18.894681566Z" level=info msg="StartContainer for \"8e9fe789b84944e3d69645774b921b86a45b07677c28a38c0424ac9dc05f3122\" returns successfully"
May 8 00:31:19.143105 kubelet[2224]: E0508 00:31:19.143001 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:31:19.422615 env[1315]: time="2025-05-08T00:31:19.422480926Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:31:19.441041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e9fe789b84944e3d69645774b921b86a45b07677c28a38c0424ac9dc05f3122-rootfs.mount: Deactivated successfully.
May 8 00:31:19.444130 env[1315]: time="2025-05-08T00:31:19.444087077Z" level=info msg="shim disconnected" id=8e9fe789b84944e3d69645774b921b86a45b07677c28a38c0424ac9dc05f3122
May 8 00:31:19.444322 env[1315]: time="2025-05-08T00:31:19.444303963Z" level=warning msg="cleaning up after shim disconnected" id=8e9fe789b84944e3d69645774b921b86a45b07677c28a38c0424ac9dc05f3122 namespace=k8s.io
May 8 00:31:19.444398 env[1315]: time="2025-05-08T00:31:19.444384246Z" level=info msg="cleaning up dead shim"
May 8 00:31:19.450640 env[1315]: time="2025-05-08T00:31:19.450591827Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:31:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3020 runtime=io.containerd.runc.v2\n"
May 8 00:31:19.471085 kubelet[2224]: I0508 00:31:19.470801 2224 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 8 00:31:19.488564 kubelet[2224]: I0508 00:31:19.488514 2224 topology_manager.go:215] "Topology Admit Handler" podUID="6037af79-2659-4a47-9819-2be36a07e900" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nfk8b"
May 8 00:31:19.496303 kubelet[2224]: W0508 00:31:19.493087 2224 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 8 00:31:19.496303 kubelet[2224]: E0508 00:31:19.493137 2224 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 8 00:31:19.496303 kubelet[2224]: I0508 00:31:19.495486 2224 topology_manager.go:215] "Topology Admit Handler" podUID="bce8cf8f-fe61-4c34-96ae-8a08509a41ec" podNamespace="calico-system" podName="calico-kube-controllers-756d5447f-sn9fj"
May 8 00:31:19.499069 kubelet[2224]: I0508 00:31:19.499010 2224 topology_manager.go:215] "Topology Admit Handler" podUID="151f3a99-6667-4e9d-bb95-deb81c9e6f7a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j5zxd"
May 8 00:31:19.499226 kubelet[2224]: I0508 00:31:19.499186 2224 topology_manager.go:215] "Topology Admit Handler" podUID="40e1a3ea-b656-44f2-891d-5f464556c5ae" podNamespace="calico-apiserver" podName="calico-apiserver-79bcdbc946-bfcrq"
May 8 00:31:19.500649 kubelet[2224]: I0508 00:31:19.500619 2224 topology_manager.go:215] "Topology Admit Handler" podUID="244b8596-ee88-4b1a-879a-7c87e073db5b" podNamespace="calico-apiserver" podName="calico-apiserver-79bcdbc946-6jhfh"
May 8 00:31:19.597491 kubelet[2224]: I0508 00:31:19.597451 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/151f3a99-6667-4e9d-bb95-deb81c9e6f7a-config-volume\") pod \"coredns-7db6d8ff4d-j5zxd\" (UID: \"151f3a99-6667-4e9d-bb95-deb81c9e6f7a\") " pod="kube-system/coredns-7db6d8ff4d-j5zxd"
May 8 00:31:19.597491 kubelet[2224]: I0508 00:31:19.597490 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bce8cf8f-fe61-4c34-96ae-8a08509a41ec-tigera-ca-bundle\") pod \"calico-kube-controllers-756d5447f-sn9fj\" (UID: \"bce8cf8f-fe61-4c34-96ae-8a08509a41ec\") " pod="calico-system/calico-kube-controllers-756d5447f-sn9fj"
May 8 00:31:19.597710 kubelet[2224]: I0508 00:31:19.597515 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmwc\" (UniqueName: \"kubernetes.io/projected/6037af79-2659-4a47-9819-2be36a07e900-kube-api-access-fxmwc\") pod \"coredns-7db6d8ff4d-nfk8b\" (UID: \"6037af79-2659-4a47-9819-2be36a07e900\") " pod="kube-system/coredns-7db6d8ff4d-nfk8b"
May 8 00:31:19.597710 kubelet[2224]: I0508 00:31:19.597535 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfn48\" (UniqueName: \"kubernetes.io/projected/40e1a3ea-b656-44f2-891d-5f464556c5ae-kube-api-access-wfn48\") pod \"calico-apiserver-79bcdbc946-bfcrq\" (UID: \"40e1a3ea-b656-44f2-891d-5f464556c5ae\") " pod="calico-apiserver/calico-apiserver-79bcdbc946-bfcrq"
May 8 00:31:19.597710 kubelet[2224]: I0508 00:31:19.597552 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/244b8596-ee88-4b1a-879a-7c87e073db5b-calico-apiserver-certs\") pod \"calico-apiserver-79bcdbc946-6jhfh\" (UID: \"244b8596-ee88-4b1a-879a-7c87e073db5b\") " pod="calico-apiserver/calico-apiserver-79bcdbc946-6jhfh"
May 8 00:31:19.597710 kubelet[2224]: I0508 00:31:19.597569 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfsk\" (UniqueName: \"kubernetes.io/projected/bce8cf8f-fe61-4c34-96ae-8a08509a41ec-kube-api-access-wrfsk\") pod \"calico-kube-controllers-756d5447f-sn9fj\" (UID: \"bce8cf8f-fe61-4c34-96ae-8a08509a41ec\") " pod="calico-system/calico-kube-controllers-756d5447f-sn9fj"
May 8 00:31:19.597710 kubelet[2224]: I0508 00:31:19.597589 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjsg\" (UniqueName: \"kubernetes.io/projected/151f3a99-6667-4e9d-bb95-deb81c9e6f7a-kube-api-access-stjsg\") pod \"coredns-7db6d8ff4d-j5zxd\" (UID: \"151f3a99-6667-4e9d-bb95-deb81c9e6f7a\") " pod="kube-system/coredns-7db6d8ff4d-j5zxd"
May 8 00:31:19.597838 kubelet[2224]: I0508 00:31:19.597605 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40e1a3ea-b656-44f2-891d-5f464556c5ae-calico-apiserver-certs\") pod \"calico-apiserver-79bcdbc946-bfcrq\" (UID: \"40e1a3ea-b656-44f2-891d-5f464556c5ae\") " pod="calico-apiserver/calico-apiserver-79bcdbc946-bfcrq"
May 8 00:31:19.597838 kubelet[2224]: I0508 00:31:19.597638 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgl7\" (UniqueName: \"kubernetes.io/projected/244b8596-ee88-4b1a-879a-7c87e073db5b-kube-api-access-wbgl7\") pod \"calico-apiserver-79bcdbc946-6jhfh\" (UID: \"244b8596-ee88-4b1a-879a-7c87e073db5b\") " pod="calico-apiserver/calico-apiserver-79bcdbc946-6jhfh"
May 8 00:31:19.597838 kubelet[2224]: I0508 00:31:19.597664 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6037af79-2659-4a47-9819-2be36a07e900-config-volume\") pod \"coredns-7db6d8ff4d-nfk8b\" (UID: \"6037af79-2659-4a47-9819-2be36a07e900\") " pod="kube-system/coredns-7db6d8ff4d-nfk8b"
May 8 00:31:19.816480 env[1315]: time="2025-05-08T00:31:19.816432668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756d5447f-sn9fj,Uid:bce8cf8f-fe61-4c34-96ae-8a08509a41ec,Namespace:calico-system,Attempt:0,}"
May 8 00:31:19.817098 env[1315]: time="2025-05-08T00:31:19.816902922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-bfcrq,Uid:40e1a3ea-b656-44f2-891d-5f464556c5ae,Namespace:calico-apiserver,Attempt:0,}"
May 8 00:31:19.819335 env[1315]: time="2025-05-08T00:31:19.819138467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-6jhfh,Uid:244b8596-ee88-4b1a-879a-7c87e073db5b,Namespace:calico-apiserver,Attempt:0,}"
May 8 00:31:20.084821 env[1315]: time="2025-05-08T00:31:20.084714459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76g2m,Uid:f2615509-fc42-4214-b9b8-44dfb15979ff,Namespace:calico-system,Attempt:0,}"
May 8 00:31:20.112498 env[1315]: time="2025-05-08T00:31:20.112406680Z" level=error msg="Failed to destroy network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.112978 env[1315]: time="2025-05-08T00:31:20.112928775Z" level=error msg="encountered an error cleaning up failed sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.113038 env[1315]: time="2025-05-08T00:31:20.112988897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-bfcrq,Uid:40e1a3ea-b656-44f2-891d-5f464556c5ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.114003 kubelet[2224]: E0508 00:31:20.113958 2224 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.114081 kubelet[2224]: E0508 00:31:20.114031 2224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bcdbc946-bfcrq"
May 8 00:31:20.114168 env[1315]: time="2025-05-08T00:31:20.114121849Z" level=error msg="Failed to destroy network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.114757 kubelet[2224]: E0508 00:31:20.114705 2224 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bcdbc946-bfcrq"
May 8 00:31:20.114809 kubelet[2224]: E0508 00:31:20.114770 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79bcdbc946-bfcrq_calico-apiserver(40e1a3ea-b656-44f2-891d-5f464556c5ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79bcdbc946-bfcrq_calico-apiserver(40e1a3ea-b656-44f2-891d-5f464556c5ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79bcdbc946-bfcrq" podUID="40e1a3ea-b656-44f2-891d-5f464556c5ae"
May 8 00:31:20.116380 env[1315]: time="2025-05-08T00:31:20.115302962Z" level=error msg="encountered an error cleaning up failed sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.116380 env[1315]: time="2025-05-08T00:31:20.115377044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756d5447f-sn9fj,Uid:bce8cf8f-fe61-4c34-96ae-8a08509a41ec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.116510 kubelet[2224]: E0508 00:31:20.115516 2224 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.116510 kubelet[2224]: E0508 00:31:20.115575 2224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-756d5447f-sn9fj"
May 8 00:31:20.116510 kubelet[2224]: E0508 00:31:20.115591 2224 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-756d5447f-sn9fj"
May 8 00:31:20.116599 kubelet[2224]: E0508 00:31:20.115621 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-756d5447f-sn9fj_calico-system(bce8cf8f-fe61-4c34-96ae-8a08509a41ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-756d5447f-sn9fj_calico-system(bce8cf8f-fe61-4c34-96ae-8a08509a41ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-756d5447f-sn9fj" podUID="bce8cf8f-fe61-4c34-96ae-8a08509a41ec"
May 8 00:31:20.122032 env[1315]: time="2025-05-08T00:31:20.121990711Z" level=error msg="Failed to destroy network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.122333 env[1315]: time="2025-05-08T00:31:20.122304440Z" level=error msg="encountered an error cleaning up failed sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.122380 env[1315]: time="2025-05-08T00:31:20.122347561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-6jhfh,Uid:244b8596-ee88-4b1a-879a-7c87e073db5b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.122522 kubelet[2224]: E0508 00:31:20.122497 2224 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.122570 kubelet[2224]: E0508 00:31:20.122535 2224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bcdbc946-6jhfh"
May 8 00:31:20.122570 kubelet[2224]: E0508 00:31:20.122554 2224 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bcdbc946-6jhfh"
May 8 00:31:20.122637 kubelet[2224]: E0508 00:31:20.122591 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79bcdbc946-6jhfh_calico-apiserver(244b8596-ee88-4b1a-879a-7c87e073db5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79bcdbc946-6jhfh_calico-apiserver(244b8596-ee88-4b1a-879a-7c87e073db5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79bcdbc946-6jhfh" podUID="244b8596-ee88-4b1a-879a-7c87e073db5b"
May 8 00:31:20.147190 kubelet[2224]: I0508 00:31:20.147071 2224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859"
May 8 00:31:20.147528 kubelet[2224]: E0508 00:31:20.147267 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:31:20.149691 env[1315]: time="2025-05-08T00:31:20.149636411Z" level=info msg="StopPodSandbox for \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\""
May 8 00:31:20.150145 env[1315]: time="2025-05-08T00:31:20.150113784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 8 00:31:20.151301 kubelet[2224]: I0508 00:31:20.151220 2224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c"
May 8 00:31:20.152045 env[1315]: time="2025-05-08T00:31:20.152015758Z" level=info msg="StopPodSandbox for \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\""
May 8 00:31:20.152870 kubelet[2224]: I0508 00:31:20.152842 2224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93"
May 8 00:31:20.153340 env[1315]: time="2025-05-08T00:31:20.153306954Z" level=info msg="StopPodSandbox for \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\""
May 8 00:31:20.165435 env[1315]: time="2025-05-08T00:31:20.165369895Z" level=error msg="Failed to destroy network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.165768 env[1315]: time="2025-05-08T00:31:20.165731785Z" level=error msg="encountered an error cleaning up failed sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.165830 env[1315]: time="2025-05-08T00:31:20.165783026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76g2m,Uid:f2615509-fc42-4214-b9b8-44dfb15979ff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.166013 kubelet[2224]: E0508 00:31:20.165974 2224 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.166083 kubelet[2224]: E0508 00:31:20.166029 2224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76g2m"
May 8 00:31:20.166083 kubelet[2224]: E0508 00:31:20.166062 2224 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76g2m"
May 8 00:31:20.166152 kubelet[2224]: E0508 00:31:20.166102 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76g2m_calico-system(f2615509-fc42-4214-b9b8-44dfb15979ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76g2m_calico-system(f2615509-fc42-4214-b9b8-44dfb15979ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76g2m" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff"
May 8 00:31:20.186992 env[1315]: time="2025-05-08T00:31:20.186926983Z" level=error msg="StopPodSandbox for \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\" failed" error="failed to destroy network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.187262 kubelet[2224]: E0508 00:31:20.187212 2224 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93"
May 8 00:31:20.187348 kubelet[2224]: E0508 00:31:20.187300 2224 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93"}
May 8 00:31:20.187385 kubelet[2224]: E0508 00:31:20.187373 2224 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"244b8596-ee88-4b1a-879a-7c87e073db5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 8 00:31:20.187438 kubelet[2224]: E0508 00:31:20.187397 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"244b8596-ee88-4b1a-879a-7c87e073db5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79bcdbc946-6jhfh" podUID="244b8596-ee88-4b1a-879a-7c87e073db5b"
May 8 00:31:20.187518 env[1315]: time="2025-05-08T00:31:20.187109428Z" level=error msg="StopPodSandbox for \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\" failed" error="failed to destroy network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.187668 kubelet[2224]: E0508 00:31:20.187636 2224 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c"
May 8 00:31:20.187717 kubelet[2224]: E0508 00:31:20.187676 2224 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c"}
May 8 00:31:20.187717 kubelet[2224]: E0508 00:31:20.187703 2224 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bce8cf8f-fe61-4c34-96ae-8a08509a41ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 8 00:31:20.187787 kubelet[2224]: E0508 00:31:20.187720 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bce8cf8f-fe61-4c34-96ae-8a08509a41ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-756d5447f-sn9fj" podUID="bce8cf8f-fe61-4c34-96ae-8a08509a41ec"
May 8 00:31:20.195313 env[1315]: time="2025-05-08T00:31:20.195234698Z" level=error msg="StopPodSandbox for \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\" failed" error="failed to destroy network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:31:20.195489 kubelet[2224]: E0508 00:31:20.195447 2224 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859"
May 8 00:31:20.195548 kubelet[2224]: E0508 00:31:20.195499 2224 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859"}
May 8 00:31:20.195548 kubelet[2224]: E0508 00:31:20.195529 2224 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40e1a3ea-b656-44f2-891d-5f464556c5ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 8 00:31:20.195642 kubelet[2224]: E0508 00:31:20.195548 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40e1a3ea-b656-44f2-891d-5f464556c5ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79bcdbc946-bfcrq" podUID="40e1a3ea-b656-44f2-891d-5f464556c5ae"
May 8 00:31:20.391835 kubelet[2224]: E0508 00:31:20.391715 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:20.392342 env[1315]: time="2025-05-08T00:31:20.392303699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nfk8b,Uid:6037af79-2659-4a47-9819-2be36a07e900,Namespace:kube-system,Attempt:0,}" May 8 00:31:20.416043 kubelet[2224]: E0508 00:31:20.416010 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:20.416699 env[1315]: time="2025-05-08T00:31:20.416568783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5zxd,Uid:151f3a99-6667-4e9d-bb95-deb81c9e6f7a,Namespace:kube-system,Attempt:0,}" May 8 00:31:20.452989 env[1315]: time="2025-05-08T00:31:20.452934089Z" level=error msg="Failed to destroy network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.453517 env[1315]: time="2025-05-08T00:31:20.453482105Z" level=error msg="encountered an error cleaning up failed sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.453717 env[1315]: time="2025-05-08T00:31:20.453678430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nfk8b,Uid:6037af79-2659-4a47-9819-2be36a07e900,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.454040 kubelet[2224]: E0508 00:31:20.453997 2224 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.454114 kubelet[2224]: E0508 00:31:20.454059 2224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nfk8b" May 8 00:31:20.454114 kubelet[2224]: E0508 00:31:20.454083 2224 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nfk8b" May 8 00:31:20.454180 kubelet[2224]: E0508 00:31:20.454124 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nfk8b_kube-system(6037af79-2659-4a47-9819-2be36a07e900)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nfk8b_kube-system(6037af79-2659-4a47-9819-2be36a07e900)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nfk8b" podUID="6037af79-2659-4a47-9819-2be36a07e900" May 8 00:31:20.479036 env[1315]: time="2025-05-08T00:31:20.478934703Z" level=error msg="Failed to destroy network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.479328 env[1315]: time="2025-05-08T00:31:20.479297073Z" level=error msg="encountered an error cleaning up failed sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.479377 env[1315]: time="2025-05-08T00:31:20.479346075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5zxd,Uid:151f3a99-6667-4e9d-bb95-deb81c9e6f7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.479598 kubelet[2224]: E0508 00:31:20.479555 2224 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:20.479669 kubelet[2224]: E0508 00:31:20.479623 2224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5zxd" May 8 00:31:20.479669 kubelet[2224]: E0508 00:31:20.479652 2224 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5zxd" May 8 00:31:20.479732 kubelet[2224]: E0508 00:31:20.479697 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j5zxd_kube-system(151f3a99-6667-4e9d-bb95-deb81c9e6f7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j5zxd_kube-system(151f3a99-6667-4e9d-bb95-deb81c9e6f7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5zxd" podUID="151f3a99-6667-4e9d-bb95-deb81c9e6f7a" May 8 00:31:21.156557 kubelet[2224]: I0508 00:31:21.156518 2224 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:21.157368 env[1315]: time="2025-05-08T00:31:21.157323943Z" level=info msg="StopPodSandbox for \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\"" May 8 00:31:21.157805 kubelet[2224]: I0508 00:31:21.157744 2224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:21.159125 kubelet[2224]: I0508 00:31:21.159102 2224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:21.159475 env[1315]: time="2025-05-08T00:31:21.158496015Z" level=info msg="StopPodSandbox for \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\"" May 8 00:31:21.159627 env[1315]: time="2025-05-08T00:31:21.159602206Z" level=info msg="StopPodSandbox for \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\"" May 8 00:31:21.187659 env[1315]: time="2025-05-08T00:31:21.187595890Z" level=error msg="StopPodSandbox for \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\" failed" error="failed to destroy network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:21.188121 kubelet[2224]: E0508 00:31:21.188077 2224 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:21.188192 kubelet[2224]: E0508 00:31:21.188135 2224 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276"} May 8 00:31:21.188192 kubelet[2224]: E0508 00:31:21.188167 2224 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2615509-fc42-4214-b9b8-44dfb15979ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:31:21.188299 kubelet[2224]: E0508 00:31:21.188195 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2615509-fc42-4214-b9b8-44dfb15979ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76g2m" podUID="f2615509-fc42-4214-b9b8-44dfb15979ff" May 8 00:31:21.191724 env[1315]: time="2025-05-08T00:31:21.191682681Z" level=error msg="StopPodSandbox for \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\" failed" error="failed to destroy network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" May 8 00:31:21.191896 kubelet[2224]: E0508 00:31:21.191846 2224 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:21.191951 kubelet[2224]: E0508 00:31:21.191898 2224 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20"} May 8 00:31:21.191951 kubelet[2224]: E0508 00:31:21.191924 2224 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6037af79-2659-4a47-9819-2be36a07e900\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:31:21.192024 kubelet[2224]: E0508 00:31:21.191952 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6037af79-2659-4a47-9819-2be36a07e900\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nfk8b" 
podUID="6037af79-2659-4a47-9819-2be36a07e900" May 8 00:31:21.196458 env[1315]: time="2025-05-08T00:31:21.196413611Z" level=error msg="StopPodSandbox for \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\" failed" error="failed to destroy network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:31:21.196625 kubelet[2224]: E0508 00:31:21.196587 2224 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:21.196682 kubelet[2224]: E0508 00:31:21.196625 2224 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648"} May 8 00:31:21.196682 kubelet[2224]: E0508 00:31:21.196661 2224 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"151f3a99-6667-4e9d-bb95-deb81c9e6f7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:31:21.196748 kubelet[2224]: E0508 00:31:21.196683 2224 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"151f3a99-6667-4e9d-bb95-deb81c9e6f7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5zxd" podUID="151f3a99-6667-4e9d-bb95-deb81c9e6f7a" May 8 00:31:22.358843 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:46568.service. May 8 00:31:22.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.15:22-10.0.0.1:46568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:22.359847 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:31:22.359921 kernel: audit: type=1130 audit(1746664282.357:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.15:22-10.0.0.1:46568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:22.567000 audit[3413]: USER_ACCT pid=3413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.568732 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 46568 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:22.571313 kernel: audit: type=1101 audit(1746664282.567:300): pid=3413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.570000 audit[3413]: CRED_ACQ pid=3413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.574462 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:22.575641 kernel: audit: type=1103 audit(1746664282.570:301): pid=3413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.575710 kernel: audit: type=1006 audit(1746664282.570:302): pid=3413 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 8 00:31:22.575734 kernel: audit: type=1300 audit(1746664282.570:302): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9e95150 a2=3 a3=1 items=0 ppid=1 pid=3413 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 
00:31:22.570000 audit[3413]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9e95150 a2=3 a3=1 items=0 ppid=1 pid=3413 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:22.577922 kernel: audit: type=1327 audit(1746664282.570:302): proctitle=737368643A20636F7265205B707269765D May 8 00:31:22.570000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:22.579534 systemd[1]: Started session-9.scope. May 8 00:31:22.579741 systemd-logind[1297]: New session 9 of user core. May 8 00:31:22.584000 audit[3413]: USER_START pid=3413 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.585000 audit[3416]: CRED_ACQ pid=3416 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.590519 kernel: audit: type=1105 audit(1746664282.584:303): pid=3413 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.590598 kernel: audit: type=1103 audit(1746664282.585:304): pid=3416 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.705127 sshd[3413]: pam_unix(sshd:session): session closed for user core May 8 00:31:22.704000 audit[3413]: USER_END 
pid=3413 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.707622 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:46568.service: Deactivated successfully. May 8 00:31:22.708643 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:31:22.704000 audit[3413]: CRED_DISP pid=3413 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.708949 systemd-logind[1297]: Session 9 logged out. Waiting for processes to exit. May 8 00:31:22.709611 systemd-logind[1297]: Removed session 9. May 8 00:31:22.710975 kernel: audit: type=1106 audit(1746664282.704:305): pid=3413 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.711046 kernel: audit: type=1104 audit(1746664282.704:306): pid=3413 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:22.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.15:22-10.0.0.1:46568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:25.385011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431266974.mount: Deactivated successfully. 
May 8 00:31:25.686598 env[1315]: time="2025-05-08T00:31:25.686443031Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:25.688122 env[1315]: time="2025-05-08T00:31:25.688065430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:25.689506 env[1315]: time="2025-05-08T00:31:25.689479544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:25.691094 env[1315]: time="2025-05-08T00:31:25.690990341Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:25.691506 env[1315]: time="2025-05-08T00:31:25.691446432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 8 00:31:25.706112 env[1315]: time="2025-05-08T00:31:25.705672216Z" level=info msg="CreateContainer within sandbox \"fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:31:25.725731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684337701.mount: Deactivated successfully. 
May 8 00:31:25.729393 env[1315]: time="2025-05-08T00:31:25.729348068Z" level=info msg="CreateContainer within sandbox \"fae1c02732df1834ebccd22567be536f9950f5402a1a9228cf71bdc4f6eacd0f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2df37e6097c8619330671586743a000b3bb35278a5f5f5ea6d9ba90d21a4b92\"" May 8 00:31:25.730237 env[1315]: time="2025-05-08T00:31:25.730207689Z" level=info msg="StartContainer for \"c2df37e6097c8619330671586743a000b3bb35278a5f5f5ea6d9ba90d21a4b92\"" May 8 00:31:25.814196 env[1315]: time="2025-05-08T00:31:25.814148038Z" level=info msg="StartContainer for \"c2df37e6097c8619330671586743a000b3bb35278a5f5f5ea6d9ba90d21a4b92\" returns successfully" May 8 00:31:25.973637 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:31:25.973772 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:31:26.170405 kubelet[2224]: E0508 00:31:26.170364 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:26.184661 kubelet[2224]: I0508 00:31:26.184584 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gr7rq" podStartSLOduration=1.915375898 podStartE2EDuration="16.184567751s" podCreationTimestamp="2025-05-08 00:31:10 +0000 UTC" firstStartedPulling="2025-05-08 00:31:11.423308524 +0000 UTC m=+22.439987858" lastFinishedPulling="2025-05-08 00:31:25.692500377 +0000 UTC m=+36.709179711" observedRunningTime="2025-05-08 00:31:26.1832246 +0000 UTC m=+37.199903934" watchObservedRunningTime="2025-05-08 00:31:26.184567751 +0000 UTC m=+37.201247085" May 8 00:31:27.173102 kubelet[2224]: I0508 00:31:27.172303 2224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:31:27.173102 kubelet[2224]: E0508 00:31:27.173034 2224 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:27.252000 audit[3559]: AVC avc: denied { write } for pid=3559 comm="tee" name="fd" dev="proc" ino=19739 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:31:27.252000 audit[3559]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff5a26a2e a2=241 a3=1b6 items=1 ppid=3516 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.252000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 8 00:31:27.252000 audit: PATH item=0 name="/dev/fd/63" inode=18215 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:31:27.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:31:27.256000 audit[3566]: AVC avc: denied { write } for pid=3566 comm="tee" name="fd" dev="proc" ino=19746 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:31:27.256000 audit[3566]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffce9eda2e a2=241 a3=1b6 items=1 ppid=3519 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.256000 audit: CWD cwd="/etc/service/enabled/confd/log" May 8 00:31:27.256000 audit: PATH item=0 name="/dev/fd/63" inode=19743 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:31:27.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:31:27.268000 audit[3568]: AVC avc: denied { write } for pid=3568 comm="tee" name="fd" dev="proc" ino=20750 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:31:27.268000 audit[3568]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffec300a30 a2=241 a3=1b6 items=1 ppid=3506 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.268000 audit: CWD cwd="/etc/service/enabled/cni/log" May 8 00:31:27.268000 audit: PATH item=0 name="/dev/fd/63" inode=20745 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:31:27.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:31:27.271000 audit[3576]: AVC avc: denied { write } for pid=3576 comm="tee" name="fd" dev="proc" ino=20754 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:31:27.271000 audit[3576]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc9ba9a2e a2=241 a3=1b6 items=1 ppid=3510 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.271000 audit: CWD cwd="/etc/service/enabled/felix/log" May 8 00:31:27.271000 audit: PATH item=0 name="/dev/fd/63" inode=18887 dev=00:0b mode=010600 ouid=0 
ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:31:27.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:31:27.294000 audit[3572]: AVC avc: denied { write } for pid=3572 comm="tee" name="fd" dev="proc" ino=19760 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:31:27.294000 audit[3572]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffea976a1e a2=241 a3=1b6 items=1 ppid=3509 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.294000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 8 00:31:27.294000 audit: PATH item=0 name="/dev/fd/63" inode=19750 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:31:27.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:31:27.295000 audit[3583]: AVC avc: denied { write } for pid=3583 comm="tee" name="fd" dev="proc" ino=18890 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:31:27.295000 audit[3583]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcebd0a1f a2=241 a3=1b6 items=1 ppid=3508 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.295000 audit: CWD 
cwd="/etc/service/enabled/node-status-reporter/log" May 8 00:31:27.295000 audit: PATH item=0 name="/dev/fd/63" inode=19757 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:31:27.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:31:27.307000 audit[3594]: AVC avc: denied { write } for pid=3594 comm="tee" name="fd" dev="proc" ino=19765 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:31:27.307000 audit[3594]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc675da2f a2=241 a3=1b6 items=1 ppid=3514 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.307000 audit: CWD cwd="/etc/service/enabled/bird/log" May 8 00:31:27.307000 audit: PATH item=0 name="/dev/fd/63" inode=19762 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:31:27.307000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.429755 kernel: kauditd_printk_skb: 36 callbacks suppressed May 8 00:31:27.429856 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.432002 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434105 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434187 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.436048 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.439986 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.440052 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.447197 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.447297 kernel: audit: type=1400 audit(1746664287.425:315): avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.425000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.448911 kernel: audit: type=1334 audit(1746664287.425:315): prog-id=10 op=LOAD May 8 00:31:27.425000 audit: BPF prog-id=10 op=LOAD May 8 00:31:27.425000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=3 a0=5 a1=ffffedb9cb18 a2=98 a3=ffffedb9cb08 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.433000 audit: BPF prog-id=10 op=UNLOAD May 8 00:31:27.433000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.433000 audit: BPF prog-id=11 op=LOAD May 8 00:31:27.433000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffedb9c7a8 a2=74 a3=95 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.433000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.434000 audit: BPF prog-id=11 op=UNLOAD May 8 00:31:27.434000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { perfmon } 
for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.434000 audit: BPF prog-id=12 op=LOAD May 8 00:31:27.434000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffedb9c808 a2=94 a3=2 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.434000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.436000 audit: BPF prog-id=12 op=UNLOAD May 8 00:31:27.522000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC 
avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.522000 audit: BPF prog-id=13 op=LOAD May 8 00:31:27.522000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffedb9c7c8 a2=40 a3=ffffedb9c7f8 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.522000 audit: BPF prog-id=13 op=UNLOAD May 8 00:31:27.522000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 8 00:31:27.522000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffedb9c8e0 a2=50 a3=0 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.530000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.530000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffedb9c838 a2=28 a3=ffffedb9c968 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.530000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.530000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffedb9c868 a2=28 a3=ffffedb9c998 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.530000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.530000 audit[3627]: SYSCALL arch=c00000b7 
syscall=280 success=no exit=-22 a0=12 a1=ffffedb9c718 a2=28 a3=ffffedb9c848 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.530000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.530000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffedb9c888 a2=28 a3=ffffedb9c9b8 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.530000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.530000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffedb9c868 a2=28 a3=ffffedb9c998 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffedb9c858 
a2=28 a3=ffffedb9c988 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffedb9c888 a2=28 a3=ffffedb9c9b8 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffedb9c868 a2=28 a3=ffffedb9c998 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffedb9c888 a2=28 a3=ffffedb9c9b8 items=0 ppid=3512 pid=3627 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffedb9c858 a2=28 a3=ffffedb9c988 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffedb9c8d8 a2=28 a3=ffffedb9ca18 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffedb9c610 a2=50 a3=0 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for 
pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit: BPF prog-id=14 op=LOAD May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffedb9c618 a2=94 a3=5 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit: BPF prog-id=14 op=UNLOAD May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffedb9c720 a2=50 a3=0 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffedb9c868 a2=4 a3=3 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: 
denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { confidentiality } for pid=3627 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffedb9c848 a2=94 a3=6 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { confidentiality } for pid=3627 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffedb9c018 a2=94 a3=83 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.531000 audit[3627]: AVC avc: denied { confidentiality } for pid=3627 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 
May 8 00:31:27.531000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffedb9c018 a2=94 a3=83 items=0 ppid=3512 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.531000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit: BPF prog-id=15 op=LOAD May 8 00:31:27.543000 audit[3630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd28fff8 a2=98 a3=ffffcd28ffe8 items=0 ppid=3512 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.543000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 8 00:31:27.543000 audit: BPF prog-id=15 op=UNLOAD May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: 
denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit: BPF prog-id=16 op=LOAD May 8 00:31:27.543000 audit[3630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd28fea8 a2=74 a3=95 items=0 ppid=3512 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.543000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 8 00:31:27.543000 audit: BPF prog-id=16 op=UNLOAD May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 
audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { perfmon } for pid=3630 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit[3630]: AVC avc: denied { bpf } for pid=3630 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.543000 audit: BPF prog-id=17 op=LOAD May 8 00:31:27.543000 audit[3630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd28fed8 a2=40 a3=ffffcd28ff08 items=0 ppid=3512 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.543000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 8 00:31:27.543000 audit: BPF prog-id=17 op=UNLOAD May 8 00:31:27.594680 systemd-networkd[1096]: vxlan.calico: Link UP May 8 00:31:27.594686 systemd-networkd[1096]: vxlan.calico: Gained carrier May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit: BPF prog-id=18 op=LOAD May 8 00:31:27.612000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc8fa1f08 a2=98 a3=ffffc8fa1ef8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.612000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.612000 audit: BPF prog-id=18 op=UNLOAD May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit: BPF prog-id=19 op=LOAD May 8 00:31:27.612000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc8fa1be8 a2=74 a3=95 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.612000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.612000 audit: BPF prog-id=19 op=UNLOAD May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.612000 audit: BPF prog-id=20 op=LOAD May 8 00:31:27.612000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc8fa1c48 a2=94 a3=2 items=0 
ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.612000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit: BPF prog-id=20 op=UNLOAD May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc8fa1c78 a2=28 a3=ffffc8fa1da8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc8fa1ca8 a2=28 a3=ffffc8fa1dd8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc8fa1b58 a2=28 a3=ffffc8fa1c88 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc8fa1cc8 a2=28 a3=ffffc8fa1df8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc8fa1ca8 a2=28 a3=ffffc8fa1dd8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc8fa1c98 a2=28 a3=ffffc8fa1dc8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc8fa1cc8 a2=28 a3=ffffc8fa1df8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc8fa1ca8 a2=28 a3=ffffc8fa1dd8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc8fa1cc8 a2=28 a3=ffffc8fa1df8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc8fa1c98 a2=28 a3=ffffc8fa1dc8 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc8fa1d18 a2=28 a3=ffffc8fa1e58 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 
audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit: BPF prog-id=21 op=LOAD May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc8fa1b38 a2=40 a3=ffffc8fa1b68 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit: BPF prog-id=21 op=UNLOAD May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffc8fa1b60 a2=50 a3=0 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffc8fa1b60 a2=50 a3=0 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 8 00:31:27.613000 audit: BPF prog-id=22 op=LOAD May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc8fa12c8 a2=94 a3=2 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.613000 audit: BPF prog-id=22 op=UNLOAD May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { perfmon } for pid=3658 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit[3658]: AVC avc: denied { bpf } for pid=3658 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.613000 audit: BPF prog-id=23 op=LOAD May 8 00:31:27.613000 audit[3658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc8fa1458 a2=94 a3=30 items=0 ppid=3512 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.613000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for 
pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit: BPF prog-id=24 op=LOAD May 8 00:31:27.616000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff18b4ef8 a2=98 a3=fffff18b4ee8 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.616000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.616000 audit: BPF prog-id=24 op=UNLOAD May 8 00:31:27.616000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit[3661]: AVC avc: 
denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.616000 audit: BPF prog-id=25 op=LOAD May 8 00:31:27.616000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff18b4b88 a2=74 a3=95 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.617000 audit: BPF prog-id=25 op=UNLOAD May 8 00:31:27.617000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.617000 audit: BPF prog-id=26 op=LOAD May 8 00:31:27.617000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff18b4be8 a2=94 a3=2 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.617000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.617000 audit: BPF prog-id=26 op=UNLOAD May 8 00:31:27.704000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.704000 audit: BPF prog-id=27 op=LOAD May 8 00:31:27.704000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff18b4ba8 a2=40 a3=fffff18b4bd8 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.704000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.705000 
audit: BPF prog-id=27 op=UNLOAD May 8 00:31:27.705000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.705000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff18b4cc0 a2=50 a3=0 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.705000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.708452 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:53864.service. May 8 00:31:27.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.15:22-10.0.0.1:53864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff18b4c18 a2=28 a3=fffff18b4d48 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff18b4c48 a2=28 a3=fffff18b4d78 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff18b4af8 a2=28 a3=fffff18b4c28 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff18b4c68 a2=28 a3=fffff18b4d98 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff18b4c48 a2=28 a3=fffff18b4d78 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff18b4c38 a2=28 a3=fffff18b4d68 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff18b4c68 a2=28 a3=fffff18b4d98 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff18b4c48 a2=28 a3=fffff18b4d78 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff18b4c68 a2=28 a3=fffff18b4d98 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff18b4c38 a2=28 a3=fffff18b4d68 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.715000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.715000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff18b4cb8 a2=28 a3=fffff18b4df8 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff18b49f0 a2=50 a3=0 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit: BPF prog-id=28 op=LOAD May 8 00:31:27.716000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff18b49f8 a2=94 a3=5 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.716000 audit: BPF 
prog-id=28 op=UNLOAD May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff18b4b00 a2=50 a3=0 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff18b4c48 a2=4 a3=3 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 
00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { confidentiality } for pid=3661 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:31:27.716000 audit[3661]: 
SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff18b4c28 a2=94 a3=6 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { confidentiality } for pid=3661 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:31:27.716000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff18b43f8 a2=94 a3=83 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.716000 audit[3661]: AVC avc: denied { confidentiality } for pid=3661 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:31:27.716000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff18b43f8 a2=94 a3=83 
items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.717000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.717000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff18b5e38 a2=10 a3=fffff18b5f28 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.717000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.717000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.717000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff18b5cf8 a2=10 a3=fffff18b5de8 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.717000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 
8 00:31:27.717000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.717000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff18b5c68 a2=10 a3=fffff18b5de8 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.717000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.717000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:31:27.717000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff18b5c68 a2=10 a3=fffff18b5de8 items=0 ppid=3512 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.717000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:31:27.727000 audit: BPF prog-id=23 op=UNLOAD May 8 00:31:27.757000 audit[3669]: USER_ACCT pid=3669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.758824 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 53864 ssh2: RSA 
SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:27.760000 audit[3669]: CRED_ACQ pid=3669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.760000 audit[3669]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea221280 a2=3 a3=1 items=0 ppid=1 pid=3669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.760000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:27.762292 sshd[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:27.777191 systemd-logind[1297]: New session 10 of user core. May 8 00:31:27.777441 systemd[1]: Started session-10.scope. May 8 00:31:27.783000 audit[3669]: USER_START pid=3669 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.783000 audit[3691]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3691 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:27.783000 audit[3691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe838a4e0 a2=0 a3=ffff9d76ffa8 items=0 ppid=3512 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.783000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:27.786000 audit[3702]: CRED_ACQ pid=3702 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.790000 audit[3693]: NETFILTER_CFG table=filter:98 family=2 entries=39 op=nft_register_chain pid=3693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:27.790000 audit[3693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=fffffb6b5ca0 a2=0 a3=ffff87ba0fa8 items=0 ppid=3512 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.790000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:27.791000 audit[3695]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3695 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:27.791000 audit[3695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffcec49230 a2=0 a3=ffffaa9c2fa8 items=0 ppid=3512 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.791000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:27.793000 audit[3692]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=3692 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:27.793000 audit[3692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff7b69ac0 a2=0 a3=ffffb0b7bfa8 items=0 ppid=3512 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.793000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:27.927386 sshd[3669]: pam_unix(sshd:session): session closed for user core May 8 00:31:27.928916 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:53876.service. May 8 00:31:27.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.15:22-10.0.0.1:53876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:27.928000 audit[3669]: USER_END pid=3669 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.928000 audit[3669]: CRED_DISP pid=3669 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.931473 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:53864.service: Deactivated successfully. May 8 00:31:27.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.15:22-10.0.0.1:53864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:27.932504 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:31:27.932909 systemd-logind[1297]: Session 10 logged out. Waiting for processes to exit. May 8 00:31:27.933683 systemd-logind[1297]: Removed session 10. May 8 00:31:27.970000 audit[3716]: USER_ACCT pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.972477 sshd[3716]: Accepted publickey for core from 10.0.0.1 port 53876 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:27.971000 audit[3716]: CRED_ACQ pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.972000 audit[3716]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec7c9570 a2=3 a3=1 items=0 ppid=1 pid=3716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:27.972000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:27.973625 sshd[3716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:27.977694 systemd-logind[1297]: New session 11 of user core. May 8 00:31:27.978169 systemd[1]: Started session-11.scope. 
May 8 00:31:27.981000 audit[3716]: USER_START pid=3716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:27.982000 audit[3721]: CRED_ACQ pid=3721 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.136733 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:53882.service. May 8 00:31:28.137237 sshd[3716]: pam_unix(sshd:session): session closed for user core May 8 00:31:28.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.15:22-10.0.0.1:53882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:28.137000 audit[3716]: USER_END pid=3716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.137000 audit[3716]: CRED_DISP pid=3716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.140418 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:53876.service: Deactivated successfully. May 8 00:31:28.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.15:22-10.0.0.1:53876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:28.144853 systemd-logind[1297]: Session 11 logged out. Waiting for processes to exit. May 8 00:31:28.144941 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:31:28.145929 systemd-logind[1297]: Removed session 11. May 8 00:31:28.183000 audit[3730]: USER_ACCT pid=3730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.184644 sshd[3730]: Accepted publickey for core from 10.0.0.1 port 53882 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:28.184000 audit[3730]: CRED_ACQ pid=3730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.184000 audit[3730]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff09d2480 a2=3 a3=1 items=0 ppid=1 pid=3730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:28.184000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:28.185752 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:28.188978 systemd-logind[1297]: New session 12 of user core. May 8 00:31:28.189871 systemd[1]: Started session-12.scope. 
May 8 00:31:28.202000 audit[3730]: USER_START pid=3730 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.205000 audit[3735]: CRED_ACQ pid=3735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.331497 sshd[3730]: pam_unix(sshd:session): session closed for user core May 8 00:31:28.337000 audit[3730]: USER_END pid=3730 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.337000 audit[3730]: CRED_DISP pid=3730 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:28.341787 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:53882.service: Deactivated successfully. May 8 00:31:28.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.15:22-10.0.0.1:53882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:28.342784 systemd-logind[1297]: Session 12 logged out. Waiting for processes to exit. May 8 00:31:28.342858 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:31:28.343582 systemd-logind[1297]: Removed session 12. 
May 8 00:31:28.814437 systemd-networkd[1096]: vxlan.calico: Gained IPv6LL May 8 00:31:31.072805 env[1315]: time="2025-05-08T00:31:31.072756873Z" level=info msg="StopPodSandbox for \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\"" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.379 [INFO][3766] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.379 [INFO][3766] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" iface="eth0" netns="/var/run/netns/cni-3db55317-c64e-99aa-158b-d19462977d31" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.380 [INFO][3766] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" iface="eth0" netns="/var/run/netns/cni-3db55317-c64e-99aa-158b-d19462977d31" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.386 [INFO][3766] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" iface="eth0" netns="/var/run/netns/cni-3db55317-c64e-99aa-158b-d19462977d31" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.386 [INFO][3766] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.386 [INFO][3766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.566 [INFO][3774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.566 [INFO][3774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.566 [INFO][3774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.576 [WARNING][3774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.576 [INFO][3774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.578 [INFO][3774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:31.593838 env[1315]: 2025-05-08 00:31:31.591 [INFO][3766] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:31.595967 env[1315]: time="2025-05-08T00:31:31.595924616Z" level=info msg="TearDown network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\" successfully" May 8 00:31:31.596046 env[1315]: time="2025-05-08T00:31:31.595967217Z" level=info msg="StopPodSandbox for \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\" returns successfully" May 8 00:31:31.596355 systemd[1]: run-netns-cni\x2d3db55317\x2dc64e\x2d99aa\x2d158b\x2dd19462977d31.mount: Deactivated successfully. 
May 8 00:31:31.596832 env[1315]: time="2025-05-08T00:31:31.596800594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756d5447f-sn9fj,Uid:bce8cf8f-fe61-4c34-96ae-8a08509a41ec,Namespace:calico-system,Attempt:1,}" May 8 00:31:31.757217 systemd-networkd[1096]: cali6ce38bdc3c3: Link UP May 8 00:31:31.759533 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:31:31.759576 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6ce38bdc3c3: link becomes ready May 8 00:31:31.759205 systemd-networkd[1096]: cali6ce38bdc3c3: Gained carrier May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.645 [INFO][3782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0 calico-kube-controllers-756d5447f- calico-system bce8cf8f-fe61-4c34-96ae-8a08509a41ec 854 0 2025-05-08 00:31:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:756d5447f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-756d5447f-sn9fj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6ce38bdc3c3 [] []}} ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" Pod="calico-kube-controllers-756d5447f-sn9fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.645 [INFO][3782] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" Pod="calico-kube-controllers-756d5447f-sn9fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.775136 
env[1315]: 2025-05-08 00:31:31.686 [INFO][3797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" HandleID="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.704 [INFO][3797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" HandleID="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-756d5447f-sn9fj", "timestamp":"2025-05-08 00:31:31.685995606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.705 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.705 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.705 [INFO][3797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.707 [INFO][3797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.723 [INFO][3797] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.730 [INFO][3797] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.733 [INFO][3797] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.736 [INFO][3797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.736 [INFO][3797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.738 [INFO][3797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4 May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.742 [INFO][3797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.752 [INFO][3797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" host="localhost" May 8 00:31:31.775136 
env[1315]: 2025-05-08 00:31:31.752 [INFO][3797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" host="localhost" May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.752 [INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:31.775136 env[1315]: 2025-05-08 00:31:31.752 [INFO][3797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" HandleID="k8s-pod-network.de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.775776 env[1315]: 2025-05-08 00:31:31.754 [INFO][3782] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" Pod="calico-kube-controllers-756d5447f-sn9fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0", GenerateName:"calico-kube-controllers-756d5447f-", Namespace:"calico-system", SelfLink:"", UID:"bce8cf8f-fe61-4c34-96ae-8a08509a41ec", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756d5447f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-756d5447f-sn9fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ce38bdc3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:31.775776 env[1315]: 2025-05-08 00:31:31.754 [INFO][3782] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" Pod="calico-kube-controllers-756d5447f-sn9fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.775776 env[1315]: 2025-05-08 00:31:31.754 [INFO][3782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ce38bdc3c3 ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" Pod="calico-kube-controllers-756d5447f-sn9fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.775776 env[1315]: 2025-05-08 00:31:31.759 [INFO][3782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" Pod="calico-kube-controllers-756d5447f-sn9fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.775776 env[1315]: 2025-05-08 00:31:31.760 [INFO][3782] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" 
Pod="calico-kube-controllers-756d5447f-sn9fj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0", GenerateName:"calico-kube-controllers-756d5447f-", Namespace:"calico-system", SelfLink:"", UID:"bce8cf8f-fe61-4c34-96ae-8a08509a41ec", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756d5447f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4", Pod:"calico-kube-controllers-756d5447f-sn9fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ce38bdc3c3", MAC:"d6:07:59:77:b9:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:31.775776 env[1315]: 2025-05-08 00:31:31.773 [INFO][3782] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4" Namespace="calico-system" Pod="calico-kube-controllers-756d5447f-sn9fj" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:31.783000 audit[3823]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:31.783000 audit[3823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffe85eb620 a2=0 a3=ffffad619fa8 items=0 ppid=3512 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:31.783000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:31.789862 env[1315]: time="2025-05-08T00:31:31.789794122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:31.790778 env[1315]: time="2025-05-08T00:31:31.789834402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:31.790778 env[1315]: time="2025-05-08T00:31:31.789852763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:31.790778 env[1315]: time="2025-05-08T00:31:31.790039687Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4 pid=3832 runtime=io.containerd.runc.v2 May 8 00:31:31.840746 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:31:31.860382 env[1315]: time="2025-05-08T00:31:31.860267065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756d5447f-sn9fj,Uid:bce8cf8f-fe61-4c34-96ae-8a08509a41ec,Namespace:calico-system,Attempt:1,} returns sandbox id \"de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4\"" May 8 00:31:31.863668 env[1315]: time="2025-05-08T00:31:31.863637695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:31:33.073624 env[1315]: time="2025-05-08T00:31:33.073579985Z" level=info msg="StopPodSandbox for \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\"" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.152 [INFO][3888] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.152 [INFO][3888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" iface="eth0" netns="/var/run/netns/cni-e1f9c8eb-9be1-f2fb-db39-5089139de4ce" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.153 [INFO][3888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" iface="eth0" netns="/var/run/netns/cni-e1f9c8eb-9be1-f2fb-db39-5089139de4ce" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.153 [INFO][3888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" iface="eth0" netns="/var/run/netns/cni-e1f9c8eb-9be1-f2fb-db39-5089139de4ce" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.153 [INFO][3888] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.153 [INFO][3888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.197 [INFO][3897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.198 [INFO][3897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.198 [INFO][3897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.206 [WARNING][3897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.207 [INFO][3897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.208 [INFO][3897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:33.211907 env[1315]: 2025-05-08 00:31:33.210 [INFO][3888] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:33.214255 env[1315]: time="2025-05-08T00:31:33.214192302Z" level=info msg="TearDown network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\" successfully" May 8 00:31:33.214255 env[1315]: time="2025-05-08T00:31:33.214231623Z" level=info msg="StopPodSandbox for \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\" returns successfully" May 8 00:31:33.214330 systemd[1]: run-netns-cni\x2de1f9c8eb\x2d9be1\x2df2fb\x2ddb39\x2d5089139de4ce.mount: Deactivated successfully. May 8 00:31:33.214960 env[1315]: time="2025-05-08T00:31:33.214916597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76g2m,Uid:f2615509-fc42-4214-b9b8-44dfb15979ff,Namespace:calico-system,Attempt:1,}" May 8 00:31:33.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.15:22-10.0.0.1:57320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:33.336112 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:57320.service. May 8 00:31:33.339853 kernel: kauditd_printk_skb: 506 callbacks suppressed May 8 00:31:33.339979 kernel: audit: type=1130 audit(1746664293.334:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.15:22-10.0.0.1:57320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:33.369716 systemd-networkd[1096]: cali63a9e7d6c72: Link UP May 8 00:31:33.371458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:31:33.371634 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali63a9e7d6c72: link becomes ready May 8 00:31:33.371610 systemd-networkd[1096]: cali63a9e7d6c72: Gained carrier May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.275 [INFO][3905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--76g2m-eth0 csi-node-driver- calico-system f2615509-fc42-4214-b9b8-44dfb15979ff 876 0 2025-05-08 00:31:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-76g2m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali63a9e7d6c72 [] []}} ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.276 [INFO][3905] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" 
Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.317 [INFO][3919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" HandleID="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.329 [INFO][3919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" HandleID="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000354400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-76g2m", "timestamp":"2025-05-08 00:31:33.317360914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.329 [INFO][3919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.329 [INFO][3919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.329 [INFO][3919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.331 [INFO][3919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.338 [INFO][3919] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.345 [INFO][3919] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.346 [INFO][3919] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.353 [INFO][3919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.353 [INFO][3919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.355 [INFO][3919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.359 [INFO][3919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.364 [INFO][3919] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" host="localhost" May 8 00:31:33.384500 
env[1315]: 2025-05-08 00:31:33.364 [INFO][3919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" host="localhost" May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.364 [INFO][3919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:33.384500 env[1315]: 2025-05-08 00:31:33.364 [INFO][3919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" HandleID="k8s-pod-network.6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.385095 env[1315]: 2025-05-08 00:31:33.367 [INFO][3905] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--76g2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2615509-fc42-4214-b9b8-44dfb15979ff", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-76g2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63a9e7d6c72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:33.385095 env[1315]: 2025-05-08 00:31:33.367 [INFO][3905] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.385095 env[1315]: 2025-05-08 00:31:33.367 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63a9e7d6c72 ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.385095 env[1315]: 2025-05-08 00:31:33.371 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.385095 env[1315]: 2025-05-08 00:31:33.373 [INFO][3905] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--76g2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2615509-fc42-4214-b9b8-44dfb15979ff", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce", Pod:"csi-node-driver-76g2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63a9e7d6c72", MAC:"12:dc:94:96:d5:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:33.385095 env[1315]: 2025-05-08 00:31:33.381 [INFO][3905] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce" Namespace="calico-system" Pod="csi-node-driver-76g2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:33.392000 audit[3943]: NETFILTER_CFG table=filter:102 family=2 entries=34 op=nft_register_chain pid=3943 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:33.392000 audit[3943]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=18640 a0=3 a1=ffffe9bb0a90 a2=0 a3=ffffa4261fa8 items=0 ppid=3512 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:33.398679 kernel: audit: type=1325 audit(1746664293.392:439): table=filter:102 family=2 entries=34 op=nft_register_chain pid=3943 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:33.398759 kernel: audit: type=1300 audit(1746664293.392:439): arch=c00000b7 syscall=211 success=yes exit=18640 a0=3 a1=ffffe9bb0a90 a2=0 a3=ffffa4261fa8 items=0 ppid=3512 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:33.398785 kernel: audit: type=1327 audit(1746664293.392:439): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:33.392000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:33.405108 env[1315]: time="2025-05-08T00:31:33.405053539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:33.405311 env[1315]: time="2025-05-08T00:31:33.405262463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:33.405480 env[1315]: time="2025-05-08T00:31:33.405453867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:33.405701 env[1315]: time="2025-05-08T00:31:33.405671671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce pid=3951 runtime=io.containerd.runc.v2 May 8 00:31:33.410000 audit[3926]: USER_ACCT pid=3926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.415239 kernel: audit: type=1101 audit(1746664293.410:440): pid=3926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.415336 sshd[3926]: Accepted publickey for core from 10.0.0.1 port 57320 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:33.414000 audit[3926]: CRED_ACQ pid=3926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.418454 kernel: audit: type=1103 audit(1746664293.414:441): pid=3926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.418525 kernel: audit: type=1006 audit(1746664293.417:442): pid=3926 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 May 8 00:31:33.417000 audit[3926]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc42fd410 a2=3 
a3=1 items=0 ppid=1 pid=3926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:33.422015 kernel: audit: type=1300 audit(1746664293.417:442): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc42fd410 a2=3 a3=1 items=0 ppid=1 pid=3926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:33.422108 kernel: audit: type=1327 audit(1746664293.417:442): proctitle=737368643A20636F7265205B707269765D May 8 00:31:33.417000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:33.423384 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:33.428315 systemd[1]: Started session-13.scope. May 8 00:31:33.428724 systemd-logind[1297]: New session 13 of user core. May 8 00:31:33.438000 audit[3926]: USER_START pid=3926 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.443000 audit[3978]: CRED_ACQ pid=3978 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.444316 kernel: audit: type=1105 audit(1746664293.438:443): pid=3926 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.447238 systemd-resolved[1234]: Failed to determine the local 
hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:31:33.459652 env[1315]: time="2025-05-08T00:31:33.459612704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76g2m,Uid:f2615509-fc42-4214-b9b8-44dfb15979ff,Namespace:calico-system,Attempt:1,} returns sandbox id \"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce\"" May 8 00:31:33.607432 sshd[3926]: pam_unix(sshd:session): session closed for user core May 8 00:31:33.607000 audit[3926]: USER_END pid=3926 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.608000 audit[3926]: CRED_DISP pid=3926 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:33.610828 systemd-logind[1297]: Session 13 logged out. Waiting for processes to exit. May 8 00:31:33.611752 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:57320.service: Deactivated successfully. May 8 00:31:33.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.15:22-10.0.0.1:57320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:33.612575 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:31:33.612984 systemd-logind[1297]: Removed session 13. 
May 8 00:31:33.614613 systemd-networkd[1096]: cali6ce38bdc3c3: Gained IPv6LL May 8 00:31:33.951579 env[1315]: time="2025-05-08T00:31:33.951521889Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:33.953083 env[1315]: time="2025-05-08T00:31:33.953004078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:33.954574 env[1315]: time="2025-05-08T00:31:33.954523388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:33.956255 env[1315]: time="2025-05-08T00:31:33.956220862Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:33.956759 env[1315]: time="2025-05-08T00:31:33.956725992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 8 00:31:33.960556 env[1315]: time="2025-05-08T00:31:33.958488907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:31:33.967331 env[1315]: time="2025-05-08T00:31:33.966802713Z" level=info msg="CreateContainer within sandbox \"de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:31:33.977542 env[1315]: time="2025-05-08T00:31:33.977504166Z" level=info msg="CreateContainer within sandbox 
\"de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a13e506958b4d76fe2a90ca6344b506f8c3f4e77432bab5b3869af00282a2960\"" May 8 00:31:33.978392 env[1315]: time="2025-05-08T00:31:33.978365463Z" level=info msg="StartContainer for \"a13e506958b4d76fe2a90ca6344b506f8c3f4e77432bab5b3869af00282a2960\"" May 8 00:31:34.050432 env[1315]: time="2025-05-08T00:31:34.050380476Z" level=info msg="StartContainer for \"a13e506958b4d76fe2a90ca6344b506f8c3f4e77432bab5b3869af00282a2960\" returns successfully" May 8 00:31:34.073356 env[1315]: time="2025-05-08T00:31:34.073307123Z" level=info msg="StopPodSandbox for \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\"" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.121 [INFO][4054] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.122 [INFO][4054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" iface="eth0" netns="/var/run/netns/cni-3354db03-fa97-4c35-b4c3-d29038743321" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.122 [INFO][4054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" iface="eth0" netns="/var/run/netns/cni-3354db03-fa97-4c35-b4c3-d29038743321" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.122 [INFO][4054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" iface="eth0" netns="/var/run/netns/cni-3354db03-fa97-4c35-b4c3-d29038743321" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.122 [INFO][4054] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.122 [INFO][4054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.155 [INFO][4064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.157 [INFO][4064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.157 [INFO][4064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.221 [WARNING][4064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.221 [INFO][4064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.224 [INFO][4064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:34.235125 env[1315]: 2025-05-08 00:31:34.230 [INFO][4054] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:34.239112 env[1315]: time="2025-05-08T00:31:34.239028553Z" level=info msg="TearDown network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\" successfully" May 8 00:31:34.239210 env[1315]: time="2025-05-08T00:31:34.239114835Z" level=info msg="StopPodSandbox for \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\" returns successfully" May 8 00:31:34.239844 env[1315]: time="2025-05-08T00:31:34.239745007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-6jhfh,Uid:244b8596-ee88-4b1a-879a-7c87e073db5b,Namespace:calico-apiserver,Attempt:1,}" May 8 00:31:34.291881 kubelet[2224]: I0508 00:31:34.290401 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-756d5447f-sn9fj" podStartSLOduration=22.195592423 podStartE2EDuration="24.290377194s" podCreationTimestamp="2025-05-08 00:31:10 +0000 UTC" firstStartedPulling="2025-05-08 00:31:31.863107484 +0000 UTC m=+42.879786818" 
lastFinishedPulling="2025-05-08 00:31:33.957892255 +0000 UTC m=+44.974571589" observedRunningTime="2025-05-08 00:31:34.225214924 +0000 UTC m=+45.241894258" watchObservedRunningTime="2025-05-08 00:31:34.290377194 +0000 UTC m=+45.307056529" May 8 00:31:34.359335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibcfa9b3d559: link becomes ready May 8 00:31:34.356447 systemd-networkd[1096]: calibcfa9b3d559: Link UP May 8 00:31:34.357324 systemd-networkd[1096]: calibcfa9b3d559: Gained carrier May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.291 [INFO][4086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0 calico-apiserver-79bcdbc946- calico-apiserver 244b8596-ee88-4b1a-879a-7c87e073db5b 888 0 2025-05-08 00:31:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79bcdbc946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79bcdbc946-6jhfh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibcfa9b3d559 [] []}} ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.291 [INFO][4086] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.317 [INFO][4111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" HandleID="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.328 [INFO][4111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" HandleID="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79bcdbc946-6jhfh", "timestamp":"2025-05-08 00:31:34.317810929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.328 [INFO][4111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.328 [INFO][4111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.328 [INFO][4111] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.329 [INFO][4111] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.333 [INFO][4111] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.336 [INFO][4111] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.338 [INFO][4111] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.340 [INFO][4111] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.340 [INFO][4111] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.342 [INFO][4111] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754 May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.346 [INFO][4111] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.351 [INFO][4111] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" host="localhost" May 8 00:31:34.370894 
env[1315]: 2025-05-08 00:31:34.351 [INFO][4111] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" host="localhost" May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.351 [INFO][4111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:34.370894 env[1315]: 2025-05-08 00:31:34.351 [INFO][4111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" HandleID="k8s-pod-network.23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.371836 env[1315]: 2025-05-08 00:31:34.354 [INFO][4086] cni-plugin/k8s.go 386: Populated endpoint ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"244b8596-ee88-4b1a-879a-7c87e073db5b", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79bcdbc946-6jhfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcfa9b3d559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:34.371836 env[1315]: 2025-05-08 00:31:34.354 [INFO][4086] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.371836 env[1315]: 2025-05-08 00:31:34.354 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibcfa9b3d559 ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.371836 env[1315]: 2025-05-08 00:31:34.357 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.371836 env[1315]: 2025-05-08 00:31:34.357 [INFO][4086] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"244b8596-ee88-4b1a-879a-7c87e073db5b", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754", Pod:"calico-apiserver-79bcdbc946-6jhfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcfa9b3d559", MAC:"32:48:91:ed:46:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:34.371836 env[1315]: 2025-05-08 00:31:34.367 [INFO][4086] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-6jhfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:34.376000 audit[4133]: 
NETFILTER_CFG table=filter:103 family=2 entries=54 op=nft_register_chain pid=4133 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:34.376000 audit[4133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28720 a0=3 a1=ffffea24d530 a2=0 a3=ffff8f832fa8 items=0 ppid=3512 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:34.376000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:34.381723 env[1315]: time="2025-05-08T00:31:34.381642294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:34.381723 env[1315]: time="2025-05-08T00:31:34.381699295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:34.381863 env[1315]: time="2025-05-08T00:31:34.381709695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:34.382010 env[1315]: time="2025-05-08T00:31:34.381974580Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754 pid=4140 runtime=io.containerd.runc.v2 May 8 00:31:34.417428 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:31:34.433257 env[1315]: time="2025-05-08T00:31:34.433201499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-6jhfh,Uid:244b8596-ee88-4b1a-879a-7c87e073db5b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754\"" May 8 00:31:34.598425 systemd[1]: run-netns-cni\x2d3354db03\x2dfa97\x2d4c35\x2db4c3\x2dd29038743321.mount: Deactivated successfully. May 8 00:31:34.638720 systemd-networkd[1096]: cali63a9e7d6c72: Gained IPv6LL May 8 00:31:35.073906 env[1315]: time="2025-05-08T00:31:35.073214990Z" level=info msg="StopPodSandbox for \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\"" May 8 00:31:35.073906 env[1315]: time="2025-05-08T00:31:35.073705839Z" level=info msg="StopPodSandbox for \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\"" May 8 00:31:35.074227 env[1315]: time="2025-05-08T00:31:35.074198648Z" level=info msg="StopPodSandbox for \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\"" May 8 00:31:35.176486 env[1315]: time="2025-05-08T00:31:35.176440764Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:35.178007 env[1315]: time="2025-05-08T00:31:35.177972873Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:35.179629 env[1315]: time="2025-05-08T00:31:35.179593304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:35.180942 env[1315]: time="2025-05-08T00:31:35.180903009Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:35.181360 env[1315]: time="2025-05-08T00:31:35.181322897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 8 00:31:35.182527 env[1315]: time="2025-05-08T00:31:35.182318796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:31:35.183560 env[1315]: time="2025-05-08T00:31:35.183533859Z" level=info msg="CreateContainer within sandbox \"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:31:35.201663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119083175.mount: Deactivated successfully. 
May 8 00:31:35.210961 env[1315]: time="2025-05-08T00:31:35.210914623Z" level=info msg="CreateContainer within sandbox \"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"50db6a4ca83e69adb92f904963304a0dda39de564ae44f719388a5642c1e4b2f\"" May 8 00:31:35.213492 env[1315]: time="2025-05-08T00:31:35.213457672Z" level=info msg="StartContainer for \"50db6a4ca83e69adb92f904963304a0dda39de564ae44f719388a5642c1e4b2f\"" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.146 [INFO][4222] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.146 [INFO][4222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" iface="eth0" netns="/var/run/netns/cni-d1825e18-8803-01ff-93f5-57e7c8669306" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.146 [INFO][4222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" iface="eth0" netns="/var/run/netns/cni-d1825e18-8803-01ff-93f5-57e7c8669306" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.146 [INFO][4222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" iface="eth0" netns="/var/run/netns/cni-d1825e18-8803-01ff-93f5-57e7c8669306" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.146 [INFO][4222] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.146 [INFO][4222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.188 [INFO][4249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.188 [INFO][4249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.189 [INFO][4249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.201 [WARNING][4249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.202 [INFO][4249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.204 [INFO][4249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:35.213588 env[1315]: 2025-05-08 00:31:35.210 [INFO][4222] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:35.213893 env[1315]: time="2025-05-08T00:31:35.213704196Z" level=info msg="TearDown network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\" successfully" May 8 00:31:35.213893 env[1315]: time="2025-05-08T00:31:35.213726277Z" level=info msg="StopPodSandbox for \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\" returns successfully" May 8 00:31:35.214298 kubelet[2224]: E0508 00:31:35.214113 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:35.216593 env[1315]: time="2025-05-08T00:31:35.216560171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nfk8b,Uid:6037af79-2659-4a47-9819-2be36a07e900,Namespace:kube-system,Attempt:1,}" May 8 00:31:35.220025 systemd[1]: run-netns-cni\x2dd1825e18\x2d8803\x2d01ff\x2d93f5\x2d57e7c8669306.mount: Deactivated successfully. 
May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.167 [INFO][4221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.167 [INFO][4221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" iface="eth0" netns="/var/run/netns/cni-edbf391c-d716-6d8c-744a-92f0ac6d0db4" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.167 [INFO][4221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" iface="eth0" netns="/var/run/netns/cni-edbf391c-d716-6d8c-744a-92f0ac6d0db4" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.168 [INFO][4221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" iface="eth0" netns="/var/run/netns/cni-edbf391c-d716-6d8c-744a-92f0ac6d0db4" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.168 [INFO][4221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.168 [INFO][4221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.201 [INFO][4262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.202 [INFO][4262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.204 [INFO][4262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.213 [WARNING][4262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.214 [INFO][4262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.216 [INFO][4262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:35.223033 env[1315]: 2025-05-08 00:31:35.218 [INFO][4221] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:35.224653 env[1315]: time="2025-05-08T00:31:35.224625885Z" level=info msg="TearDown network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\" successfully" May 8 00:31:35.224760 env[1315]: time="2025-05-08T00:31:35.224743367Z" level=info msg="StopPodSandbox for \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\" returns successfully" May 8 00:31:35.225176 kubelet[2224]: E0508 00:31:35.225133 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:35.226000 env[1315]: time="2025-05-08T00:31:35.225969431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5zxd,Uid:151f3a99-6667-4e9d-bb95-deb81c9e6f7a,Namespace:kube-system,Attempt:1,}" May 8 00:31:35.226614 systemd[1]: run-netns-cni\x2dedbf391c\x2dd716\x2d6d8c\x2d744a\x2d92f0ac6d0db4.mount: Deactivated successfully. May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.161 [INFO][4236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.161 [INFO][4236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" iface="eth0" netns="/var/run/netns/cni-29fc6d3d-c1a2-a573-1e04-ec3ec827878b" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.162 [INFO][4236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" iface="eth0" netns="/var/run/netns/cni-29fc6d3d-c1a2-a573-1e04-ec3ec827878b" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.162 [INFO][4236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" iface="eth0" netns="/var/run/netns/cni-29fc6d3d-c1a2-a573-1e04-ec3ec827878b" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.162 [INFO][4236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.162 [INFO][4236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.208 [INFO][4256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.208 [INFO][4256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.216 [INFO][4256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.228 [WARNING][4256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.228 [INFO][4256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.231 [INFO][4256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:35.238299 env[1315]: 2025-05-08 00:31:35.234 [INFO][4236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:35.238299 env[1315]: time="2025-05-08T00:31:35.237416970Z" level=info msg="TearDown network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\" successfully" May 8 00:31:35.238299 env[1315]: time="2025-05-08T00:31:35.237443970Z" level=info msg="StopPodSandbox for \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\" returns successfully" May 8 00:31:35.239230 env[1315]: time="2025-05-08T00:31:35.239197604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-bfcrq,Uid:40e1a3ea-b656-44f2-891d-5f464556c5ae,Namespace:calico-apiserver,Attempt:1,}" May 8 00:31:35.305613 env[1315]: time="2025-05-08T00:31:35.305570833Z" level=info msg="StartContainer for \"50db6a4ca83e69adb92f904963304a0dda39de564ae44f719388a5642c1e4b2f\" returns successfully" May 8 00:31:35.383831 systemd-networkd[1096]: cali7c8e6d14390: Link UP May 8 00:31:35.384295 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:31:35.384353 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
cali7c8e6d14390: link becomes ready May 8 00:31:35.384549 systemd-networkd[1096]: cali7c8e6d14390: Gained carrier May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.289 [INFO][4291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0 coredns-7db6d8ff4d- kube-system 6037af79-2659-4a47-9819-2be36a07e900 904 0 2025-05-08 00:31:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-nfk8b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c8e6d14390 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.289 [INFO][4291] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.329 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" HandleID="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.340 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" HandleID="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" 
Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-nfk8b", "timestamp":"2025-05-08 00:31:35.329244486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.340 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.340 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.340 [INFO][4352] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.344 [INFO][4352] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.352 [INFO][4352] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.360 [INFO][4352] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.362 [INFO][4352] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.365 [INFO][4352] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.365 [INFO][4352] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.366 [INFO][4352] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.370 [INFO][4352] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.377 [INFO][4352] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.377 [INFO][4352] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" host="localhost" May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.377 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:31:35.402677 env[1315]: 2025-05-08 00:31:35.377 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" HandleID="k8s-pod-network.f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.403237 env[1315]: 2025-05-08 00:31:35.380 [INFO][4291] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6037af79-2659-4a47-9819-2be36a07e900", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-nfk8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c8e6d14390", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:35.403237 env[1315]: 2025-05-08 00:31:35.380 [INFO][4291] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.403237 env[1315]: 2025-05-08 00:31:35.380 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c8e6d14390 ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.403237 env[1315]: 2025-05-08 00:31:35.385 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.403237 env[1315]: 2025-05-08 00:31:35.389 [INFO][4291] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6037af79-2659-4a47-9819-2be36a07e900", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a", Pod:"coredns-7db6d8ff4d-nfk8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c8e6d14390", MAC:"2a:c6:92:54:53:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:35.403237 env[1315]: 2025-05-08 00:31:35.400 [INFO][4291] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nfk8b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:35.409000 audit[4391]: NETFILTER_CFG 
table=filter:104 family=2 entries=42 op=nft_register_chain pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:35.409000 audit[4391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21508 a0=3 a1=fffffd9ec1b0 a2=0 a3=ffffb42a5fa8 items=0 ppid=3512 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:35.409000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:35.418418 env[1315]: time="2025-05-08T00:31:35.418332390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:35.418518 env[1315]: time="2025-05-08T00:31:35.418422071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:35.418518 env[1315]: time="2025-05-08T00:31:35.418468352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:35.418744 env[1315]: time="2025-05-08T00:31:35.418712197Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a pid=4403 runtime=io.containerd.runc.v2 May 8 00:31:35.421301 systemd-networkd[1096]: cali533013c8a6b: Link UP May 8 00:31:35.423746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali533013c8a6b: link becomes ready May 8 00:31:35.423265 systemd-networkd[1096]: cali533013c8a6b: Gained carrier May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.293 [INFO][4301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0 coredns-7db6d8ff4d- kube-system 151f3a99-6667-4e9d-bb95-deb81c9e6f7a 906 0 2025-05-08 00:31:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-j5zxd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali533013c8a6b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.293 [INFO][4301] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.345 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" HandleID="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.357 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" HandleID="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-j5zxd", "timestamp":"2025-05-08 00:31:35.345116389 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.357 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.377 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.377 [INFO][4359] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.379 [INFO][4359] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.390 [INFO][4359] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.394 [INFO][4359] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.400 [INFO][4359] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.402 [INFO][4359] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.402 [INFO][4359] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.405 [INFO][4359] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.408 [INFO][4359] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.415 [INFO][4359] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" host="localhost" May 8 00:31:35.437843 
env[1315]: 2025-05-08 00:31:35.415 [INFO][4359] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" host="localhost" May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.415 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:35.437843 env[1315]: 2025-05-08 00:31:35.415 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" HandleID="k8s-pod-network.8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.439349 env[1315]: 2025-05-08 00:31:35.418 [INFO][4301] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"151f3a99-6667-4e9d-bb95-deb81c9e6f7a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-7db6d8ff4d-j5zxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali533013c8a6b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:35.439349 env[1315]: 2025-05-08 00:31:35.418 [INFO][4301] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.439349 env[1315]: 2025-05-08 00:31:35.418 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali533013c8a6b ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.439349 env[1315]: 2025-05-08 00:31:35.423 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.439349 env[1315]: 2025-05-08 00:31:35.423 [INFO][4301] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"151f3a99-6667-4e9d-bb95-deb81c9e6f7a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb", Pod:"coredns-7db6d8ff4d-j5zxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali533013c8a6b", MAC:"4e:f5:ab:ba:96:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:35.439349 env[1315]: 2025-05-08 00:31:35.432 [INFO][4301] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5zxd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:35.450000 audit[4435]: NETFILTER_CFG table=filter:105 family=2 entries=38 op=nft_register_chain pid=4435 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:35.450000 audit[4435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19392 a0=3 a1=ffffdc2e3cd0 a2=0 a3=ffffb86eefa8 items=0 ppid=3512 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:35.450000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:35.457128 systemd-networkd[1096]: cali2529c907700: Link UP May 8 00:31:35.457324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2529c907700: link becomes ready May 8 00:31:35.457326 systemd-networkd[1096]: cali2529c907700: Gained carrier May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.312 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0 calico-apiserver-79bcdbc946- calico-apiserver 40e1a3ea-b656-44f2-891d-5f464556c5ae 905 0 2025-05-08 00:31:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79bcdbc946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-79bcdbc946-bfcrq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2529c907700 [] []}} ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.312 [INFO][4325] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.354 [INFO][4370] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" HandleID="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.365 [INFO][4370] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" HandleID="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79bcdbc946-bfcrq", "timestamp":"2025-05-08 00:31:35.354352006 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.365 [INFO][4370] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.415 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.416 [INFO][4370] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.418 [INFO][4370] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.424 [INFO][4370] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.428 [INFO][4370] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.430 [INFO][4370] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.433 [INFO][4370] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.433 [INFO][4370] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.435 [INFO][4370] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318 May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.439 [INFO][4370] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.444 [INFO][4370] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.444 [INFO][4370] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" host="localhost" May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.444 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:35.477310 env[1315]: 2025-05-08 00:31:35.444 [INFO][4370] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" HandleID="k8s-pod-network.993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.478356 env[1315]: 2025-05-08 00:31:35.454 [INFO][4325] cni-plugin/k8s.go 386: Populated endpoint ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"40e1a3ea-b656-44f2-891d-5f464556c5ae", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79bcdbc946-bfcrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2529c907700", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:35.478356 env[1315]: 2025-05-08 00:31:35.454 [INFO][4325] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.478356 env[1315]: 2025-05-08 00:31:35.454 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2529c907700 ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.478356 env[1315]: 2025-05-08 00:31:35.457 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.478356 env[1315]: 2025-05-08 00:31:35.458 [INFO][4325] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"40e1a3ea-b656-44f2-891d-5f464556c5ae", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318", Pod:"calico-apiserver-79bcdbc946-bfcrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2529c907700", MAC:"e6:a9:ba:bb:23:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:35.478356 env[1315]: 2025-05-08 00:31:35.469 [INFO][4325] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318" Namespace="calico-apiserver" 
Pod="calico-apiserver-79bcdbc946-bfcrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:35.482763 env[1315]: time="2025-05-08T00:31:35.482687700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:35.482763 env[1315]: time="2025-05-08T00:31:35.482746341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:35.482983 env[1315]: time="2025-05-08T00:31:35.482938665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:35.481000 audit[4462]: NETFILTER_CFG table=filter:106 family=2 entries=46 op=nft_register_chain pid=4462 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:31:35.483515 env[1315]: time="2025-05-08T00:31:35.483478195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb pid=4453 runtime=io.containerd.runc.v2 May 8 00:31:35.481000 audit[4462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23876 a0=3 a1=ffffc97478d0 a2=0 a3=ffff9ccfdfa8 items=0 ppid=3512 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:35.481000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:31:35.493098 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:31:35.495618 env[1315]: time="2025-05-08T00:31:35.495531666Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:31:35.495618 env[1315]: time="2025-05-08T00:31:35.495585547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:31:35.495618 env[1315]: time="2025-05-08T00:31:35.495611508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:31:35.495958 env[1315]: time="2025-05-08T00:31:35.495883553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318 pid=4495 runtime=io.containerd.runc.v2 May 8 00:31:35.508563 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:31:35.532972 env[1315]: time="2025-05-08T00:31:35.532031524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nfk8b,Uid:6037af79-2659-4a47-9819-2be36a07e900,Namespace:kube-system,Attempt:1,} returns sandbox id \"f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a\"" May 8 00:31:35.533087 kubelet[2224]: E0508 00:31:35.532746 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:35.534648 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:31:35.535330 env[1315]: time="2025-05-08T00:31:35.535228585Z" level=info msg="CreateContainer within sandbox \"f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:31:35.545401 env[1315]: time="2025-05-08T00:31:35.545367059Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5zxd,Uid:151f3a99-6667-4e9d-bb95-deb81c9e6f7a,Namespace:kube-system,Attempt:1,} returns sandbox id \"8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb\"" May 8 00:31:35.546224 kubelet[2224]: E0508 00:31:35.546180 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:35.548378 env[1315]: time="2025-05-08T00:31:35.548347636Z" level=info msg="CreateContainer within sandbox \"8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:31:35.555652 env[1315]: time="2025-05-08T00:31:35.555617655Z" level=info msg="CreateContainer within sandbox \"f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d541b87d7ae01142d1aa6aaeef96f07d7e2517761ad5cdd38e839d27135f14d0\"" May 8 00:31:35.556528 env[1315]: time="2025-05-08T00:31:35.556497192Z" level=info msg="StartContainer for \"d541b87d7ae01142d1aa6aaeef96f07d7e2517761ad5cdd38e839d27135f14d0\"" May 8 00:31:35.565092 env[1315]: time="2025-05-08T00:31:35.565054676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bcdbc946-bfcrq,Uid:40e1a3ea-b656-44f2-891d-5f464556c5ae,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318\"" May 8 00:31:35.569184 env[1315]: time="2025-05-08T00:31:35.569139674Z" level=info msg="CreateContainer within sandbox \"8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1621052eeb660e372d144c383e086a7503aab953944b647b8a1dd0fd161b801\"" May 8 00:31:35.569564 env[1315]: time="2025-05-08T00:31:35.569536161Z" level=info msg="StartContainer for 
\"a1621052eeb660e372d144c383e086a7503aab953944b647b8a1dd0fd161b801\"" May 8 00:31:35.602195 systemd[1]: run-netns-cni\x2d29fc6d3d\x2dc1a2\x2da573\x2d1e04\x2dec3ec827878b.mount: Deactivated successfully. May 8 00:31:35.624430 env[1315]: time="2025-05-08T00:31:35.624383690Z" level=info msg="StartContainer for \"d541b87d7ae01142d1aa6aaeef96f07d7e2517761ad5cdd38e839d27135f14d0\" returns successfully" May 8 00:31:35.633220 env[1315]: time="2025-05-08T00:31:35.632948374Z" level=info msg="StartContainer for \"a1621052eeb660e372d144c383e086a7503aab953944b647b8a1dd0fd161b801\" returns successfully" May 8 00:31:35.982430 systemd-networkd[1096]: calibcfa9b3d559: Gained IPv6LL May 8 00:31:36.201626 kubelet[2224]: E0508 00:31:36.201540 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:36.203021 kubelet[2224]: E0508 00:31:36.202848 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:36.226350 kubelet[2224]: I0508 00:31:36.226256 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j5zxd" podStartSLOduration=33.226238322 podStartE2EDuration="33.226238322s" podCreationTimestamp="2025-05-08 00:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:31:36.211819371 +0000 UTC m=+47.228498705" watchObservedRunningTime="2025-05-08 00:31:36.226238322 +0000 UTC m=+47.242917656" May 8 00:31:36.248000 audit[4628]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=4628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:36.248000 audit[4628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffde30b530 a2=0 
a3=1 items=0 ppid=2386 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:36.248000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:36.253079 kubelet[2224]: I0508 00:31:36.253022 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nfk8b" podStartSLOduration=33.253007865 podStartE2EDuration="33.253007865s" podCreationTimestamp="2025-05-08 00:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:31:36.227118619 +0000 UTC m=+47.243797913" watchObservedRunningTime="2025-05-08 00:31:36.253007865 +0000 UTC m=+47.269687199" May 8 00:31:36.256000 audit[4628]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=4628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:36.256000 audit[4628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffde30b530 a2=0 a3=1 items=0 ppid=2386 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:36.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:36.272000 audit[4630]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:36.272000 audit[4630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd48b0250 a2=0 a3=1 items=0 ppid=2386 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:36.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:36.286000 audit[4630]: NETFILTER_CFG table=nat:110 family=2 entries=47 op=nft_register_chain pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:36.286000 audit[4630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd48b0250 a2=0 a3=1 items=0 ppid=2386 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:36.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:36.750487 systemd-networkd[1096]: cali533013c8a6b: Gained IPv6LL May 8 00:31:37.070444 systemd-networkd[1096]: cali2529c907700: Gained IPv6LL May 8 00:31:37.209711 kubelet[2224]: E0508 00:31:37.209664 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:37.210614 kubelet[2224]: E0508 00:31:37.210574 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:37.326390 systemd-networkd[1096]: cali7c8e6d14390: Gained IPv6LL May 8 00:31:37.565229 env[1315]: time="2025-05-08T00:31:37.565179118Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:37.567576 env[1315]: 
time="2025-05-08T00:31:37.567534601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:37.569515 env[1315]: time="2025-05-08T00:31:37.569484957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:37.570822 env[1315]: time="2025-05-08T00:31:37.570789301Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:37.571304 env[1315]: time="2025-05-08T00:31:37.571264310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:31:37.573008 env[1315]: time="2025-05-08T00:31:37.572976422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:31:37.574348 env[1315]: time="2025-05-08T00:31:37.574265206Z" level=info msg="CreateContainer within sandbox \"23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:31:37.586788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655547841.mount: Deactivated successfully. 
May 8 00:31:37.594863 env[1315]: time="2025-05-08T00:31:37.594799184Z" level=info msg="CreateContainer within sandbox \"23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1df57ad55cceaf5150f5ac6ea5ec5502e174682fc66b8f5604d096197df2951d\"" May 8 00:31:37.595677 env[1315]: time="2025-05-08T00:31:37.595472877Z" level=info msg="StartContainer for \"1df57ad55cceaf5150f5ac6ea5ec5502e174682fc66b8f5604d096197df2951d\"" May 8 00:31:37.683522 env[1315]: time="2025-05-08T00:31:37.683469140Z" level=info msg="StartContainer for \"1df57ad55cceaf5150f5ac6ea5ec5502e174682fc66b8f5604d096197df2951d\" returns successfully" May 8 00:31:38.213452 kubelet[2224]: E0508 00:31:38.213421 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:38.215229 kubelet[2224]: E0508 00:31:38.213831 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:38.235156 kubelet[2224]: I0508 00:31:38.232589 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79bcdbc946-6jhfh" podStartSLOduration=26.094451469 podStartE2EDuration="29.2325666s" podCreationTimestamp="2025-05-08 00:31:09 +0000 UTC" firstStartedPulling="2025-05-08 00:31:34.434489684 +0000 UTC m=+45.451169018" lastFinishedPulling="2025-05-08 00:31:37.572604815 +0000 UTC m=+48.589284149" observedRunningTime="2025-05-08 00:31:38.229854831 +0000 UTC m=+49.246534165" watchObservedRunningTime="2025-05-08 00:31:38.2325666 +0000 UTC m=+49.249245934" May 8 00:31:38.241000 audit[4677]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=4677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:38.241000 
audit[4677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffffa037770 a2=0 a3=1 items=0 ppid=2386 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:38.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:38.245000 audit[4677]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:38.245000 audit[4677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffa037770 a2=0 a3=1 items=0 ppid=2386 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:38.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:38.609827 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:57326.service. May 8 00:31:38.610735 kernel: kauditd_printk_skb: 34 callbacks suppressed May 8 00:31:38.610774 kernel: audit: type=1130 audit(1746664298.608:458): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.15:22-10.0.0.1:57326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:38.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.15:22-10.0.0.1:57326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:38.649614 kubelet[2224]: I0508 00:31:38.649579 2224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:31:38.650385 kubelet[2224]: E0508 00:31:38.650364 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:38.730000 audit[4678]: USER_ACCT pid=4678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.732565 sshd[4678]: Accepted publickey for core from 10.0.0.1 port 57326 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:38.734164 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:38.732000 audit[4678]: CRED_ACQ pid=4678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.737101 kernel: audit: type=1101 audit(1746664298.730:459): pid=4678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.737232 kernel: audit: type=1103 audit(1746664298.732:460): pid=4678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.738898 kernel: audit: type=1006 audit(1746664298.732:461): pid=4678 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=14 res=1 May 8 00:31:38.732000 audit[4678]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa80e7e0 a2=3 a3=1 items=0 ppid=1 pid=4678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:38.741964 kernel: audit: type=1300 audit(1746664298.732:461): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa80e7e0 a2=3 a3=1 items=0 ppid=1 pid=4678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:38.732000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:38.742896 kernel: audit: type=1327 audit(1746664298.732:461): proctitle=737368643A20636F7265205B707269765D May 8 00:31:38.753357 systemd-logind[1297]: New session 14 of user core. May 8 00:31:38.754086 systemd[1]: Started session-14.scope. 
May 8 00:31:38.760000 audit[4678]: USER_START pid=4678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.766297 kernel: audit: type=1105 audit(1746664298.760:462): pid=4678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.765000 audit[4703]: CRED_ACQ pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.770287 kernel: audit: type=1103 audit(1746664298.765:463): pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.790689 systemd[1]: run-containerd-runc-k8s.io-c2df37e6097c8619330671586743a000b3bb35278a5f5f5ea6d9ba90d21a4b92-runc.sBp9XD.mount: Deactivated successfully. May 8 00:31:38.951217 sshd[4678]: pam_unix(sshd:session): session closed for user core May 8 00:31:38.950000 audit[4678]: USER_END pid=4678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.953673 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:57326.service: Deactivated successfully. 
May 8 00:31:38.954541 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:31:38.951000 audit[4678]: CRED_DISP pid=4678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.957894 kernel: audit: type=1106 audit(1746664298.950:464): pid=4678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.958025 kernel: audit: type=1104 audit(1746664298.951:465): pid=4678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:38.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.15:22-10.0.0.1:57326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:38.958506 systemd-logind[1297]: Session 14 logged out. Waiting for processes to exit. May 8 00:31:38.959526 systemd-logind[1297]: Removed session 14. 
May 8 00:31:38.977342 env[1315]: time="2025-05-08T00:31:38.977260952Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:38.978735 env[1315]: time="2025-05-08T00:31:38.978696298Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:38.980529 env[1315]: time="2025-05-08T00:31:38.980497930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:38.983192 env[1315]: time="2025-05-08T00:31:38.983144418Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:38.984014 env[1315]: time="2025-05-08T00:31:38.983958113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 8 00:31:38.987443 env[1315]: time="2025-05-08T00:31:38.987133451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:31:38.987443 env[1315]: time="2025-05-08T00:31:38.987180772Z" level=info msg="CreateContainer within sandbox \"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:31:39.000830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590121466.mount: Deactivated successfully. 
May 8 00:31:39.019071 env[1315]: time="2025-05-08T00:31:39.019020864Z" level=info msg="CreateContainer within sandbox \"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"87d44174b6e83c486a47e2dad2936dc5c5d5c3ebc74a94f0fcd63e280474893e\"" May 8 00:31:39.019760 env[1315]: time="2025-05-08T00:31:39.019712637Z" level=info msg="StartContainer for \"87d44174b6e83c486a47e2dad2936dc5c5d5c3ebc74a94f0fcd63e280474893e\"" May 8 00:31:39.092979 env[1315]: time="2025-05-08T00:31:39.092902024Z" level=info msg="StartContainer for \"87d44174b6e83c486a47e2dad2936dc5c5d5c3ebc74a94f0fcd63e280474893e\" returns successfully" May 8 00:31:39.153603 kubelet[2224]: I0508 00:31:39.153561 2224 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:31:39.156343 kubelet[2224]: I0508 00:31:39.156312 2224 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:31:39.217984 kubelet[2224]: E0508 00:31:39.217874 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:31:39.218981 kubelet[2224]: I0508 00:31:39.218916 2224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:31:39.272672 env[1315]: time="2025-05-08T00:31:39.272630473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:39.274538 env[1315]: time="2025-05-08T00:31:39.274504626Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:39.275168 env[1315]: time="2025-05-08T00:31:39.275143918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:39.275915 env[1315]: time="2025-05-08T00:31:39.275889411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:31:39.277173 env[1315]: time="2025-05-08T00:31:39.277142273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:31:39.279165 env[1315]: time="2025-05-08T00:31:39.279135349Z" level=info msg="CreateContainer within sandbox \"993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:31:39.290255 env[1315]: time="2025-05-08T00:31:39.290196067Z" level=info msg="CreateContainer within sandbox \"993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9aa3cb108074db144651be30cf90b259c75afebb33dff905e350bca4e8660327\"" May 8 00:31:39.290785 env[1315]: time="2025-05-08T00:31:39.290760277Z" level=info msg="StartContainer for \"9aa3cb108074db144651be30cf90b259c75afebb33dff905e350bca4e8660327\"" May 8 00:31:39.359327 env[1315]: time="2025-05-08T00:31:39.358530607Z" level=info msg="StartContainer for \"9aa3cb108074db144651be30cf90b259c75afebb33dff905e350bca4e8660327\" returns successfully" May 8 00:31:40.232024 kubelet[2224]: I0508 00:31:40.231952 
2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-76g2m" podStartSLOduration=24.707774378 podStartE2EDuration="30.231929101s" podCreationTimestamp="2025-05-08 00:31:10 +0000 UTC" firstStartedPulling="2025-05-08 00:31:33.461270377 +0000 UTC m=+44.477949711" lastFinishedPulling="2025-05-08 00:31:38.9854251 +0000 UTC m=+50.002104434" observedRunningTime="2025-05-08 00:31:39.231350696 +0000 UTC m=+50.248030030" watchObservedRunningTime="2025-05-08 00:31:40.231929101 +0000 UTC m=+51.248608435" May 8 00:31:40.241000 audit[4814]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:40.241000 audit[4814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffde08a760 a2=0 a3=1 items=0 ppid=2386 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:40.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:40.246000 audit[4814]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:40.246000 audit[4814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffde08a760 a2=0 a3=1 items=0 ppid=2386 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:40.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:41.223033 kubelet[2224]: I0508 00:31:41.222077 2224 prober_manager.go:312] "Failed 
to trigger a manual run" probe="Readiness" May 8 00:31:43.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.15:22-10.0.0.1:37580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:43.954837 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:37580.service. May 8 00:31:43.955758 kernel: kauditd_printk_skb: 7 callbacks suppressed May 8 00:31:43.955816 kernel: audit: type=1130 audit(1746664303.953:469): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.15:22-10.0.0.1:37580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:44.008000 audit[4818]: USER_ACCT pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.010295 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 37580 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:44.010000 audit[4818]: CRED_ACQ pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.014386 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:44.015203 kernel: audit: type=1101 audit(1746664304.008:470): pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.015251 kernel: audit: type=1103 audit(1746664304.010:471): pid=4818 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.016889 kernel: audit: type=1006 audit(1746664304.010:472): pid=4818 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 8 00:31:44.010000 audit[4818]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffcedfe10 a2=3 a3=1 items=0 ppid=1 pid=4818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:44.019850 kernel: audit: type=1300 audit(1746664304.010:472): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffcedfe10 a2=3 a3=1 items=0 ppid=1 pid=4818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:44.010000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:44.020997 kernel: audit: type=1327 audit(1746664304.010:472): proctitle=737368643A20636F7265205B707269765D May 8 00:31:44.023081 systemd-logind[1297]: New session 15 of user core. May 8 00:31:44.023832 systemd[1]: Started session-15.scope. 
May 8 00:31:44.026000 audit[4818]: USER_START pid=4818 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.031407 kernel: audit: type=1105 audit(1746664304.026:473): pid=4818 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.030000 audit[4821]: CRED_ACQ pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.034305 kernel: audit: type=1103 audit(1746664304.030:474): pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.188707 sshd[4818]: pam_unix(sshd:session): session closed for user core May 8 00:31:44.188000 audit[4818]: USER_END pid=4818 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.188000 audit[4818]: CRED_DISP pid=4818 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.191687 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:37580.service: 
Deactivated successfully. May 8 00:31:44.193163 systemd-logind[1297]: Session 15 logged out. Waiting for processes to exit. May 8 00:31:44.193178 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:31:44.194233 systemd-logind[1297]: Removed session 15. May 8 00:31:44.194847 kernel: audit: type=1106 audit(1746664304.188:475): pid=4818 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.194906 kernel: audit: type=1104 audit(1746664304.188:476): pid=4818 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:44.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.15:22-10.0.0.1:37580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:45.281292 kubelet[2224]: I0508 00:31:45.281191 2224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:31:45.302850 kubelet[2224]: I0508 00:31:45.302779 2224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79bcdbc946-bfcrq" podStartSLOduration=32.591275436 podStartE2EDuration="36.302760822s" podCreationTimestamp="2025-05-08 00:31:09 +0000 UTC" firstStartedPulling="2025-05-08 00:31:35.566388621 +0000 UTC m=+46.583067955" lastFinishedPulling="2025-05-08 00:31:39.277874007 +0000 UTC m=+50.294553341" observedRunningTime="2025-05-08 00:31:40.233326845 +0000 UTC m=+51.250006179" watchObservedRunningTime="2025-05-08 00:31:45.302760822 +0000 UTC m=+56.319440156" May 8 00:31:45.322000 audit[4834]: NETFILTER_CFG table=filter:115 family=2 entries=9 op=nft_register_rule pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:45.322000 audit[4834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd783ddc0 a2=0 a3=1 items=0 ppid=2386 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:45.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:45.334000 audit[4834]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:45.334000 audit[4834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffd783ddc0 a2=0 a3=1 items=0 ppid=2386 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:45.334000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:49.051630 env[1315]: time="2025-05-08T00:31:49.051592742Z" level=info msg="StopPodSandbox for \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\"" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.090 [WARNING][4858] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6037af79-2659-4a47-9819-2be36a07e900", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a", Pod:"coredns-7db6d8ff4d-nfk8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c8e6d14390", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.090 [INFO][4858] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.090 [INFO][4858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" iface="eth0" netns="" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.090 [INFO][4858] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.090 [INFO][4858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.114 [INFO][4869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.114 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.114 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.123 [WARNING][4869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.123 [INFO][4869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.125 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.131329 env[1315]: 2025-05-08 00:31:49.128 [INFO][4858] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.131771 env[1315]: time="2025-05-08T00:31:49.131360803Z" level=info msg="TearDown network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\" successfully" May 8 00:31:49.131771 env[1315]: time="2025-05-08T00:31:49.131393404Z" level=info msg="StopPodSandbox for \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\" returns successfully" May 8 00:31:49.131951 env[1315]: time="2025-05-08T00:31:49.131919332Z" level=info msg="RemovePodSandbox for \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\"" May 8 00:31:49.131994 env[1315]: time="2025-05-08T00:31:49.131959053Z" level=info msg="Forcibly stopping sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\"" May 8 00:31:49.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.15:22-10.0.0.1:37586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:49.192568 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:37586.service. May 8 00:31:49.195573 kernel: kauditd_printk_skb: 7 callbacks suppressed May 8 00:31:49.195603 kernel: audit: type=1130 audit(1746664309.191:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.15:22-10.0.0.1:37586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.169 [WARNING][4893] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6037af79-2659-4a47-9819-2be36a07e900", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f20d5a95ddbcb4576f0c869876a2c2084b1d9964bbebffd057c52c0d118e4a6a", Pod:"coredns-7db6d8ff4d-nfk8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c8e6d14390", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.170 [INFO][4893] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.170 [INFO][4893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" iface="eth0" netns="" May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.170 [INFO][4893] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.170 [INFO][4893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.190 [INFO][4902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.190 [INFO][4902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.190 [INFO][4902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.200 [WARNING][4902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.200 [INFO][4902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" HandleID="k8s-pod-network.ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" Workload="localhost-k8s-coredns--7db6d8ff4d--nfk8b-eth0" May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.201 [INFO][4902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.206149 env[1315]: 2025-05-08 00:31:49.204 [INFO][4893] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20" May 8 00:31:49.206623 env[1315]: time="2025-05-08T00:31:49.206183466Z" level=info msg="TearDown network for sandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\" successfully" May 8 00:31:49.210301 env[1315]: time="2025-05-08T00:31:49.210101168Z" level=info msg="RemovePodSandbox \"ffab3741c3d73156b4d738b7dd2341a97a3b14bffcfe3c4a57ff7afda027fb20\" returns successfully" May 8 00:31:49.213728 env[1315]: time="2025-05-08T00:31:49.213693545Z" level=info msg="StopPodSandbox for \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\"" May 8 00:31:49.252041 kernel: audit: type=1101 audit(1746664309.242:481): pid=4910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.252145 kernel: audit: 
type=1103 audit(1746664309.243:482): pid=4910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.252190 kernel: audit: type=1006 audit(1746664309.243:483): pid=4910 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 8 00:31:49.252224 kernel: audit: type=1300 audit(1746664309.243:483): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7b45870 a2=3 a3=1 items=0 ppid=1 pid=4910 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:49.252248 kernel: audit: type=1327 audit(1746664309.243:483): proctitle=737368643A20636F7265205B707269765D May 8 00:31:49.242000 audit[4910]: USER_ACCT pid=4910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.243000 audit[4910]: CRED_ACQ pid=4910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.243000 audit[4910]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7b45870 a2=3 a3=1 items=0 ppid=1 pid=4910 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:49.243000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:49.245431 sshd[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 
00:31:49.252758 sshd[4910]: Accepted publickey for core from 10.0.0.1 port 37586 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:49.256097 systemd-logind[1297]: New session 16 of user core. May 8 00:31:49.257020 systemd[1]: Started session-16.scope. May 8 00:31:49.260000 audit[4910]: USER_START pid=4910 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.265000 audit[4936]: CRED_ACQ pid=4936 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.269024 kernel: audit: type=1105 audit(1746664309.260:484): pid=4910 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.269088 kernel: audit: type=1103 audit(1746664309.265:485): pid=4936 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.275 [WARNING][4927] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--76g2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2615509-fc42-4214-b9b8-44dfb15979ff", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce", Pod:"csi-node-driver-76g2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63a9e7d6c72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.276 [INFO][4927] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.276 [INFO][4927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" iface="eth0" netns="" May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.276 [INFO][4927] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.276 [INFO][4927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.295 [INFO][4938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.296 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.296 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.304 [WARNING][4938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.304 [INFO][4938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.305 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:31:49.310721 env[1315]: 2025-05-08 00:31:49.307 [INFO][4927] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.310721 env[1315]: time="2025-05-08T00:31:49.310682878Z" level=info msg="TearDown network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\" successfully" May 8 00:31:49.310721 env[1315]: time="2025-05-08T00:31:49.310719839Z" level=info msg="StopPodSandbox for \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\" returns successfully" May 8 00:31:49.311476 env[1315]: time="2025-05-08T00:31:49.311446170Z" level=info msg="RemovePodSandbox for \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\"" May 8 00:31:49.311679 env[1315]: time="2025-05-08T00:31:49.311608413Z" level=info msg="Forcibly stopping sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\"" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.360 [WARNING][4969] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--76g2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2615509-fc42-4214-b9b8-44dfb15979ff", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6358351b029cb543f5a42d8b126826d01434b7fb38ae798c0d4ed119a88770ce", Pod:"csi-node-driver-76g2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63a9e7d6c72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.360 [INFO][4969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.360 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" iface="eth0" netns="" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.360 [INFO][4969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.360 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.384 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.384 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.384 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.394 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.394 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" HandleID="k8s-pod-network.3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" Workload="localhost-k8s-csi--node--driver--76g2m-eth0" May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.396 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:31:49.400233 env[1315]: 2025-05-08 00:31:49.398 [INFO][4969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276" May 8 00:31:49.400827 env[1315]: time="2025-05-08T00:31:49.400790383Z" level=info msg="TearDown network for sandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\" successfully" May 8 00:31:49.407932 sshd[4910]: pam_unix(sshd:session): session closed for user core May 8 00:31:49.408743 env[1315]: time="2025-05-08T00:31:49.408702108Z" level=info msg="RemovePodSandbox \"3d14dd764909b148cb03ec8e995364976951e1f0e752e6003274c26746bff276\" returns successfully" May 8 00:31:49.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.15:22-10.0.0.1:37598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:49.410516 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:37598.service. May 8 00:31:49.409000 audit[4910]: USER_END pid=4910 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.414554 systemd-logind[1297]: Session 16 logged out. Waiting for processes to exit. May 8 00:31:49.416404 kernel: audit: type=1130 audit(1746664309.409:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.15:22-10.0.0.1:37598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:31:49.416508 kernel: audit: type=1106 audit(1746664309.409:487): pid=4910 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.409000 audit[4910]: CRED_DISP pid=4910 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.15:22-10.0.0.1:37586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:49.415247 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:37586.service: Deactivated successfully. May 8 00:31:49.416718 env[1315]: time="2025-05-08T00:31:49.416412150Z" level=info msg="StopPodSandbox for \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\"" May 8 00:31:49.416146 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:31:49.417681 systemd-logind[1297]: Removed session 16. 
May 8 00:31:49.454000 audit[4986]: USER_ACCT pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.456973 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 37598 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:49.456000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.456000 audit[4986]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee31b9f0 a2=3 a3=1 items=0 ppid=1 pid=4986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:49.456000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:49.458156 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:49.462354 systemd-logind[1297]: New session 17 of user core. May 8 00:31:49.462780 systemd[1]: Started session-17.scope. 
May 8 00:31:49.466000 audit[4986]: USER_START pid=4986 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.468000 audit[5022]: CRED_ACQ pid=5022 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.457 [WARNING][5007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"40e1a3ea-b656-44f2-891d-5f464556c5ae", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318", 
Pod:"calico-apiserver-79bcdbc946-bfcrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2529c907700", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.458 [INFO][5007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.458 [INFO][5007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" iface="eth0" netns="" May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.458 [INFO][5007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.458 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.486 [INFO][5016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.486 [INFO][5016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.486 [INFO][5016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.494 [WARNING][5016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.494 [INFO][5016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.495 [INFO][5016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.499689 env[1315]: 2025-05-08 00:31:49.497 [INFO][5007] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.500140 env[1315]: time="2025-05-08T00:31:49.499721747Z" level=info msg="TearDown network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\" successfully" May 8 00:31:49.500140 env[1315]: time="2025-05-08T00:31:49.499752628Z" level=info msg="StopPodSandbox for \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\" returns successfully" May 8 00:31:49.500239 env[1315]: time="2025-05-08T00:31:49.500193635Z" level=info msg="RemovePodSandbox for \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\"" May 8 00:31:49.500290 env[1315]: time="2025-05-08T00:31:49.500236275Z" level=info msg="Forcibly stopping sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\"" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.540 [WARNING][5040] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"40e1a3ea-b656-44f2-891d-5f464556c5ae", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"993ff824b3272fc6b4f3bcf078b5b124b8c495c4e588a91d76d3966e111b7318", Pod:"calico-apiserver-79bcdbc946-bfcrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2529c907700", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.540 [INFO][5040] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.541 [INFO][5040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" iface="eth0" netns="" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.541 [INFO][5040] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.541 [INFO][5040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.572 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.572 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.573 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.581 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.581 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" HandleID="k8s-pod-network.f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" Workload="localhost-k8s-calico--apiserver--79bcdbc946--bfcrq-eth0" May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.583 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.588009 env[1315]: 2025-05-08 00:31:49.585 [INFO][5040] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859" May 8 00:31:49.588009 env[1315]: time="2025-05-08T00:31:49.587969303Z" level=info msg="TearDown network for sandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\" successfully" May 8 00:31:49.591018 env[1315]: time="2025-05-08T00:31:49.590941590Z" level=info msg="RemovePodSandbox \"f195b4d1d4be24ea505601cd1ed23897d7abc39236c254843cf74a097320e859\" returns successfully" May 8 00:31:49.591587 env[1315]: time="2025-05-08T00:31:49.591560359Z" level=info msg="StopPodSandbox for \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\"" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.629 [WARNING][5077] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"244b8596-ee88-4b1a-879a-7c87e073db5b", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754", Pod:"calico-apiserver-79bcdbc946-6jhfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcfa9b3d559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.629 [INFO][5077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.629 [INFO][5077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" iface="eth0" netns="" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.629 [INFO][5077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.629 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.650 [INFO][5086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.650 [INFO][5086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.650 [INFO][5086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.658 [WARNING][5086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.658 [INFO][5086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.660 [INFO][5086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.664724 env[1315]: 2025-05-08 00:31:49.663 [INFO][5077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.665340 env[1315]: time="2025-05-08T00:31:49.665305485Z" level=info msg="TearDown network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\" successfully" May 8 00:31:49.665423 env[1315]: time="2025-05-08T00:31:49.665405087Z" level=info msg="StopPodSandbox for \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\" returns successfully" May 8 00:31:49.665970 env[1315]: time="2025-05-08T00:31:49.665940335Z" level=info msg="RemovePodSandbox for \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\"" May 8 00:31:49.666227 env[1315]: time="2025-05-08T00:31:49.666173259Z" level=info msg="Forcibly stopping sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\"" May 8 00:31:49.727000 audit[4986]: USER_END pid=4986 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.727000 audit[4986]: CRED_DISP pid=4986 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.727432 sshd[4986]: pam_unix(sshd:session): session closed for user core May 8 00:31:49.729826 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:37606.service. May 8 00:31:49.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.15:22-10.0.0.1:37606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:49.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.15:22-10.0.0.1:37598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:49.731620 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:37598.service: Deactivated successfully. May 8 00:31:49.732700 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:31:49.732711 systemd-logind[1297]: Session 17 logged out. Waiting for processes to exit. May 8 00:31:49.738256 systemd-logind[1297]: Removed session 17. May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.701 [WARNING][5108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0", GenerateName:"calico-apiserver-79bcdbc946-", Namespace:"calico-apiserver", SelfLink:"", UID:"244b8596-ee88-4b1a-879a-7c87e073db5b", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bcdbc946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23cda9dc05d32a354b16f202d1814d5c693f154af5ce4a69500c9aad2e8a1754", Pod:"calico-apiserver-79bcdbc946-6jhfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcfa9b3d559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.702 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.702 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" iface="eth0" netns="" May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.702 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.702 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.730 [INFO][5117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.730 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.730 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.741 [WARNING][5117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.741 [INFO][5117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" HandleID="k8s-pod-network.a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" Workload="localhost-k8s-calico--apiserver--79bcdbc946--6jhfh-eth0" May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.742 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.749055 env[1315]: 2025-05-08 00:31:49.747 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93" May 8 00:31:49.749518 env[1315]: time="2025-05-08T00:31:49.749102290Z" level=info msg="TearDown network for sandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\" successfully" May 8 00:31:49.755578 env[1315]: time="2025-05-08T00:31:49.755471911Z" level=info msg="RemovePodSandbox \"a05c4c1443bc47f9b824e93e7e2ce063a3a3a3a5465602253120f5ad57f63d93\" returns successfully" May 8 00:31:49.756169 env[1315]: time="2025-05-08T00:31:49.756136281Z" level=info msg="StopPodSandbox for \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\"" May 8 00:31:49.782000 audit[5125]: USER_ACCT pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.784510 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 37606 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:49.784000 audit[5125]: CRED_ACQ 
pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.784000 audit[5125]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd075a400 a2=3 a3=1 items=0 ppid=1 pid=5125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:49.784000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:49.785885 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:49.790312 systemd-logind[1297]: New session 18 of user core. May 8 00:31:49.790724 systemd[1]: Started session-18.scope. May 8 00:31:49.794000 audit[5125]: USER_START pid=5125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.796000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:49.809749 systemd[1]: run-containerd-runc-k8s.io-a13e506958b4d76fe2a90ca6344b506f8c3f4e77432bab5b3869af00282a2960-runc.N8EOXR.mount: Deactivated successfully. May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.815 [WARNING][5144] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"151f3a99-6667-4e9d-bb95-deb81c9e6f7a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb", Pod:"coredns-7db6d8ff4d-j5zxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali533013c8a6b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.816 [INFO][5144] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.816 [INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" iface="eth0" netns="" May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.816 [INFO][5144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.816 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.841 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.841 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.841 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.852 [WARNING][5166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.852 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.856 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.863480 env[1315]: 2025-05-08 00:31:49.859 [INFO][5144] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.863480 env[1315]: time="2025-05-08T00:31:49.863419458Z" level=info msg="TearDown network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\" successfully" May 8 00:31:49.863480 env[1315]: time="2025-05-08T00:31:49.863453898Z" level=info msg="StopPodSandbox for \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\" returns successfully" May 8 00:31:49.864093 env[1315]: time="2025-05-08T00:31:49.863935586Z" level=info msg="RemovePodSandbox for \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\"" May 8 00:31:49.864093 env[1315]: time="2025-05-08T00:31:49.863967266Z" level=info msg="Forcibly stopping sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\"" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.914 [WARNING][5219] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"151f3a99-6667-4e9d-bb95-deb81c9e6f7a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8048d4a471efd072635d90b7822dade7e465f55028054d51d1aa0e32aff271bb", Pod:"coredns-7db6d8ff4d-j5zxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali533013c8a6b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.914 [INFO][5219] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.914 [INFO][5219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" iface="eth0" netns="" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.914 [INFO][5219] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.914 [INFO][5219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.934 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.935 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.935 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.948 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.948 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" HandleID="k8s-pod-network.0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" Workload="localhost-k8s-coredns--7db6d8ff4d--j5zxd-eth0" May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.951 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:49.955975 env[1315]: 2025-05-08 00:31:49.954 [INFO][5219] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648" May 8 00:31:49.956443 env[1315]: time="2025-05-08T00:31:49.956035802Z" level=info msg="TearDown network for sandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\" successfully" May 8 00:31:49.959230 env[1315]: time="2025-05-08T00:31:49.959176972Z" level=info msg="RemovePodSandbox \"0f7525820af1e617113d3b03b88e52374e7dbd580b9cb75447081cb1f14d7648\" returns successfully" May 8 00:31:49.959687 env[1315]: time="2025-05-08T00:31:49.959664219Z" level=info msg="StopPodSandbox for \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\"" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:49.995 [WARNING][5255] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0", GenerateName:"calico-kube-controllers-756d5447f-", Namespace:"calico-system", SelfLink:"", UID:"bce8cf8f-fe61-4c34-96ae-8a08509a41ec", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756d5447f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4", Pod:"calico-kube-controllers-756d5447f-sn9fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ce38bdc3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:49.996 [INFO][5255] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:49.996 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" iface="eth0" netns="" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:49.996 [INFO][5255] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:49.996 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:50.020 [INFO][5263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:50.020 [INFO][5263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:50.020 [INFO][5263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:50.029 [WARNING][5263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:50.029 [INFO][5263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:50.031 [INFO][5263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:50.034743 env[1315]: 2025-05-08 00:31:50.033 [INFO][5255] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.035241 env[1315]: time="2025-05-08T00:31:50.034765162Z" level=info msg="TearDown network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\" successfully" May 8 00:31:50.035241 env[1315]: time="2025-05-08T00:31:50.034799083Z" level=info msg="StopPodSandbox for \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\" returns successfully" May 8 00:31:50.035304 env[1315]: time="2025-05-08T00:31:50.035232089Z" level=info msg="RemovePodSandbox for \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\"" May 8 00:31:50.035304 env[1315]: time="2025-05-08T00:31:50.035262450Z" level=info msg="Forcibly stopping sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\"" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.072 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0", GenerateName:"calico-kube-controllers-756d5447f-", Namespace:"calico-system", SelfLink:"", UID:"bce8cf8f-fe61-4c34-96ae-8a08509a41ec", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756d5447f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de84817ac91bebf3ed300d1ad630302bf8ddd2e1fd2f8dd80e1a458242d7ebe4", Pod:"calico-kube-controllers-756d5447f-sn9fj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ce38bdc3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.073 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.073 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" iface="eth0" netns="" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.073 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.073 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.093 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.094 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.094 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.102 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.102 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" HandleID="k8s-pod-network.519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" Workload="localhost-k8s-calico--kube--controllers--756d5447f--sn9fj-eth0" May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.104 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:31:50.107731 env[1315]: 2025-05-08 00:31:50.106 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c" May 8 00:31:50.108364 env[1315]: time="2025-05-08T00:31:50.107763106Z" level=info msg="TearDown network for sandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\" successfully" May 8 00:31:50.110416 env[1315]: time="2025-05-08T00:31:50.110371947Z" level=info msg="RemovePodSandbox \"519cca8d30fa769936133bbc5f7357f6e4a1885ffc8ceac723340cbcfb35811c\" returns successfully" May 8 00:31:50.803628 systemd[1]: run-containerd-runc-k8s.io-a13e506958b4d76fe2a90ca6344b506f8c3f4e77432bab5b3869af00282a2960-runc.POLOY3.mount: Deactivated successfully. 
May 8 00:31:51.381000 audit[5313]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:51.381000 audit[5313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffc31cfcf0 a2=0 a3=1 items=0 ppid=2386 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:51.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:51.388457 sshd[5125]: pam_unix(sshd:session): session closed for user core May 8 00:31:51.388000 audit[5125]: USER_END pid=5125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.388000 audit[5125]: CRED_DISP pid=5125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.390651 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:37610.service. May 8 00:31:51.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.15:22-10.0.0.1:37610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:51.391747 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:37606.service: Deactivated successfully. 
May 8 00:31:51.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.15:22-10.0.0.1:37606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:51.392000 audit[5313]: NETFILTER_CFG table=nat:118 family=2 entries=22 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:51.392000 audit[5313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffc31cfcf0 a2=0 a3=1 items=0 ppid=2386 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:51.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:51.394646 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:31:51.395178 systemd-logind[1297]: Session 18 logged out. Waiting for processes to exit. May 8 00:31:51.396196 systemd-logind[1297]: Removed session 18. 
May 8 00:31:51.411000 audit[5319]: NETFILTER_CFG table=filter:119 family=2 entries=32 op=nft_register_rule pid=5319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:51.411000 audit[5319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd0000780 a2=0 a3=1 items=0 ppid=2386 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:51.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:51.417000 audit[5319]: NETFILTER_CFG table=nat:120 family=2 entries=22 op=nft_register_rule pid=5319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:31:51.417000 audit[5319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffd0000780 a2=0 a3=1 items=0 ppid=2386 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:51.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:31:51.438000 audit[5314]: USER_ACCT pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.439849 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 37610 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:51.439000 audit[5314]: CRED_ACQ pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.439000 audit[5314]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbb0d3a0 a2=3 a3=1 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:51.439000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:51.441042 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:51.445503 systemd-logind[1297]: New session 19 of user core. May 8 00:31:51.445560 systemd[1]: Started session-19.scope. May 8 00:31:51.448000 audit[5314]: USER_START pid=5314 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.449000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.690112 sshd[5314]: pam_unix(sshd:session): session closed for user core May 8 00:31:51.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.15:22-10.0.0.1:37622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:51.692401 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:37622.service. 
May 8 00:31:51.691000 audit[5314]: USER_END pid=5314 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.691000 audit[5314]: CRED_DISP pid=5314 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.694541 systemd-logind[1297]: Session 19 logged out. Waiting for processes to exit. May 8 00:31:51.695151 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:37610.service: Deactivated successfully. May 8 00:31:51.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.15:22-10.0.0.1:37610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:51.695969 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:31:51.696768 systemd-logind[1297]: Removed session 19. 
May 8 00:31:51.735000 audit[5329]: USER_ACCT pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.736960 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 37622 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:51.736000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.736000 audit[5329]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe29b02c0 a2=3 a3=1 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:31:51.736000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:31:51.738570 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:51.741930 systemd-logind[1297]: New session 20 of user core. May 8 00:31:51.742772 systemd[1]: Started session-20.scope. 
May 8 00:31:51.745000 audit[5329]: USER_START pid=5329 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.747000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.873446 sshd[5329]: pam_unix(sshd:session): session closed for user core May 8 00:31:51.873000 audit[5329]: USER_END pid=5329 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.873000 audit[5329]: CRED_DISP pid=5329 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:51.876064 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:37622.service: Deactivated successfully. May 8 00:31:51.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.15:22-10.0.0.1:37622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:51.877033 systemd-logind[1297]: Session 20 logged out. Waiting for processes to exit. May 8 00:31:51.877105 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:31:51.878190 systemd-logind[1297]: Removed session 20. 
May 8 00:31:56.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.15:22-10.0.0.1:56130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:56.876228 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:56130.service. May 8 00:31:56.878027 kernel: kauditd_printk_skb: 57 callbacks suppressed May 8 00:31:56.878110 kernel: audit: type=1130 audit(1746664316.875:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.15:22-10.0.0.1:56130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:31:56.920000 audit[5348]: USER_ACCT pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:56.922371 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 56130 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:31:56.924073 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:31:56.922000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:56.927263 kernel: audit: type=1101 audit(1746664316.920:530): pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:31:56.927322 kernel: audit: type=1103 audit(1746664316.922:531): pid=5348 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:31:56.927350 kernel: audit: type=1006 audit(1746664316.922:532): pid=5348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
May 8 00:31:56.928655 kernel: audit: type=1300 audit(1746664316.922:532): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe795fa90 a2=3 a3=1 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:31:56.922000 audit[5348]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe795fa90 a2=3 a3=1 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:31:56.929443 systemd-logind[1297]: New session 21 of user core.
May 8 00:31:56.930036 systemd[1]: Started session-21.scope.
May 8 00:31:56.932192 kernel: audit: type=1327 audit(1746664316.922:532): proctitle=737368643A20636F7265205B707269765D
May 8 00:31:56.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 8 00:31:56.934000 audit[5348]: USER_START pid=5348 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:31:56.935000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:31:56.940541 kernel: audit: type=1105 audit(1746664316.934:533): pid=5348 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:31:56.940598 kernel: audit: type=1103 audit(1746664316.935:534): pid=5351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:31:56.961000 audit[5353]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 8 00:31:56.961000 audit[5353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc57e1120 a2=0 a3=1 items=0 ppid=2386 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:31:56.968149 kernel: audit: type=1325 audit(1746664316.961:535): table=filter:121 family=2 entries=20 op=nft_register_rule pid=5353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 8 00:31:56.968202 kernel: audit: type=1300 audit(1746664316.961:535): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc57e1120 a2=0 a3=1 items=0 ppid=2386 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:31:56.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 8 00:31:56.972000 audit[5353]: NETFILTER_CFG table=nat:122 family=2 entries=106 op=nft_register_chain pid=5353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 8 00:31:56.972000 audit[5353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffc57e1120 a2=0 a3=1 items=0 ppid=2386 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:31:56.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 8 00:31:57.053560 sshd[5348]: pam_unix(sshd:session): session closed for user core
May 8 00:31:57.054000 audit[5348]: USER_END pid=5348 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:31:57.054000 audit[5348]: CRED_DISP pid=5348 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:31:57.057161 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:56130.service: Deactivated successfully.
May 8 00:31:57.058131 systemd-logind[1297]: Session 21 logged out. Waiting for processes to exit.
May 8 00:31:57.058225 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:31:57.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.15:22-10.0.0.1:56130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:31:57.058953 systemd-logind[1297]: Removed session 21.
May 8 00:32:00.072693 kubelet[2224]: E0508 00:32:00.072641 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:32:02.057056 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:56136.service.
May 8 00:32:02.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.15:22-10.0.0.1:56136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:02.057750 kernel: kauditd_printk_skb: 7 callbacks suppressed
May 8 00:32:02.057808 kernel: audit: type=1130 audit(1746664322.055:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.15:22-10.0.0.1:56136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:02.104000 audit[5366]: USER_ACCT pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.105906 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:32:02.108297 kernel: audit: type=1101 audit(1746664322.104:541): pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.107000 audit[5366]: CRED_ACQ pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.109551 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:32:02.113167 kernel: audit: type=1103 audit(1746664322.107:542): pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.113243 kernel: audit: type=1006 audit(1746664322.107:543): pid=5366 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
May 8 00:32:02.113277 kernel: audit: type=1300 audit(1746664322.107:543): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd514c010 a2=3 a3=1 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:02.107000 audit[5366]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd514c010 a2=3 a3=1 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:02.107000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 8 00:32:02.116637 kernel: audit: type=1327 audit(1746664322.107:543): proctitle=737368643A20636F7265205B707269765D
May 8 00:32:02.116367 systemd-logind[1297]: New session 22 of user core.
May 8 00:32:02.117184 systemd[1]: Started session-22.scope.
May 8 00:32:02.120000 audit[5366]: USER_START pid=5366 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.123000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.127523 kernel: audit: type=1105 audit(1746664322.120:544): pid=5366 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.127582 kernel: audit: type=1103 audit(1746664322.123:545): pid=5369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.243582 sshd[5366]: pam_unix(sshd:session): session closed for user core
May 8 00:32:02.243000 audit[5366]: USER_END pid=5366 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.246057 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:56136.service: Deactivated successfully.
May 8 00:32:02.247053 systemd-logind[1297]: Session 22 logged out. Waiting for processes to exit.
May 8 00:32:02.247086 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:32:02.243000 audit[5366]: CRED_DISP pid=5366 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.247963 systemd-logind[1297]: Removed session 22.
May 8 00:32:02.249530 kernel: audit: type=1106 audit(1746664322.243:546): pid=5366 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.249592 kernel: audit: type=1104 audit(1746664322.243:547): pid=5366 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:02.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.15:22-10.0.0.1:56136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:06.072785 kubelet[2224]: E0508 00:32:06.072747 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:32:07.072927 kubelet[2224]: E0508 00:32:07.072894 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:32:07.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.15:22-10.0.0.1:39350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:07.246588 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:39350.service.
May 8 00:32:07.247308 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 8 00:32:07.247348 kernel: audit: type=1130 audit(1746664327.245:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.15:22-10.0.0.1:39350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:07.288000 audit[5382]: USER_ACCT pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.290161 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 39350 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:32:07.291490 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:32:07.289000 audit[5382]: CRED_ACQ pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.295604 kernel: audit: type=1101 audit(1746664327.288:550): pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.295683 kernel: audit: type=1103 audit(1746664327.289:551): pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.295714 kernel: audit: type=1006 audit(1746664327.289:552): pid=5382 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
May 8 00:32:07.296089 systemd[1]: Started session-23.scope.
May 8 00:32:07.296292 systemd-logind[1297]: New session 23 of user core.
May 8 00:32:07.289000 audit[5382]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8955450 a2=3 a3=1 items=0 ppid=1 pid=5382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:07.300016 kernel: audit: type=1300 audit(1746664327.289:552): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8955450 a2=3 a3=1 items=0 ppid=1 pid=5382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:07.289000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 8 00:32:07.301089 kernel: audit: type=1327 audit(1746664327.289:552): proctitle=737368643A20636F7265205B707269765D
May 8 00:32:07.301119 kernel: audit: type=1105 audit(1746664327.299:553): pid=5382 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.299000 audit[5382]: USER_START pid=5382 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.301000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.306808 kernel: audit: type=1103 audit(1746664327.301:554): pid=5385 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.434401 sshd[5382]: pam_unix(sshd:session): session closed for user core
May 8 00:32:07.434000 audit[5382]: USER_END pid=5382 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.438146 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:39350.service: Deactivated successfully.
May 8 00:32:07.434000 audit[5382]: CRED_DISP pid=5382 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.439192 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:32:07.439231 systemd-logind[1297]: Session 23 logged out. Waiting for processes to exit.
May 8 00:32:07.439944 systemd-logind[1297]: Removed session 23.
May 8 00:32:07.441955 kernel: audit: type=1106 audit(1746664327.434:555): pid=5382 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.442045 kernel: audit: type=1104 audit(1746664327.434:556): pid=5382 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:07.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.15:22-10.0.0.1:39350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:08.665544 systemd[1]: run-containerd-runc-k8s.io-c2df37e6097c8619330671586743a000b3bb35278a5f5f5ea6d9ba90d21a4b92-runc.4I7266.mount: Deactivated successfully.
May 8 00:32:12.073081 kubelet[2224]: E0508 00:32:12.073035 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:32:12.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.15:22-10.0.0.1:39360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:12.437747 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:39360.service.
May 8 00:32:12.438531 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 8 00:32:12.438585 kernel: audit: type=1130 audit(1746664332.436:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.15:22-10.0.0.1:39360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:12.479000 audit[5424]: USER_ACCT pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.480964 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 39360 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:32:12.482405 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:32:12.480000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.485496 kernel: audit: type=1101 audit(1746664332.479:559): pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.485553 kernel: audit: type=1103 audit(1746664332.480:560): pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.485582 kernel: audit: type=1006 audit(1746664332.480:561): pid=5424 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
May 8 00:32:12.486432 systemd[1]: Started session-24.scope.
May 8 00:32:12.486621 systemd-logind[1297]: New session 24 of user core.
May 8 00:32:12.486928 kernel: audit: type=1300 audit(1746664332.480:561): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc49ad460 a2=3 a3=1 items=0 ppid=1 pid=5424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:12.480000 audit[5424]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc49ad460 a2=3 a3=1 items=0 ppid=1 pid=5424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:12.480000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 8 00:32:12.490294 kernel: audit: type=1327 audit(1746664332.480:561): proctitle=737368643A20636F7265205B707269765D
May 8 00:32:12.489000 audit[5424]: USER_START pid=5424 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.490000 audit[5427]: CRED_ACQ pid=5427 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.495536 kernel: audit: type=1105 audit(1746664332.489:562): pid=5424 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.495582 kernel: audit: type=1103 audit(1746664332.490:563): pid=5427 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.509153 kubelet[2224]: I0508 00:32:12.509106 2224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:32:12.545000 audit[5436]: NETFILTER_CFG table=filter:123 family=2 entries=8 op=nft_register_rule pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 8 00:32:12.545000 audit[5436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd9359260 a2=0 a3=1 items=0 ppid=2386 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:12.551852 kernel: audit: type=1325 audit(1746664332.545:564): table=filter:123 family=2 entries=8 op=nft_register_rule pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 8 00:32:12.551917 kernel: audit: type=1300 audit(1746664332.545:564): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd9359260 a2=0 a3=1 items=0 ppid=2386 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:12.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 8 00:32:12.555000 audit[5436]: NETFILTER_CFG table=nat:124 family=2 entries=58 op=nft_register_chain pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 8 00:32:12.555000 audit[5436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20452 a0=3 a1=ffffd9359260 a2=0 a3=1 items=0 ppid=2386 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:32:12.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 8 00:32:12.617209 sshd[5424]: pam_unix(sshd:session): session closed for user core
May 8 00:32:12.616000 audit[5424]: USER_END pid=5424 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.617000 audit[5424]: CRED_DISP pid=5424 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:32:12.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.15:22-10.0.0.1:39360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:32:12.620862 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:39360.service: Deactivated successfully.
May 8 00:32:12.622175 systemd-logind[1297]: Session 24 logged out. Waiting for processes to exit.
May 8 00:32:12.622232 systemd[1]: session-24.scope: Deactivated successfully.
May 8 00:32:12.623101 systemd-logind[1297]: Removed session 24.